Test Report: Hyper-V_Windows 18925

9bd6871c0608907332c6bb982838c8ee113ad42f:2024-05-20:34544

Failed tests (23/205)

TestAddons/parallel/Registry (82.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.9917ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zwwpn" [3869227a-4098-41f0-bdca-bbf64dc302a8] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0229244s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-c6b67" [f7feab3d-2695-4f4a-92be-d20eb369c7fa] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00883s
addons_test.go:340: (dbg) Run:  kubectl --context addons-363100 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-363100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-363100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (13.9023713s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 ip: (2.5661959s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0520 03:28:28.810509    1448 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-363100 ip"
2024/05/20 03:28:31 [DEBUG] GET http://172.25.240.77:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable registry --alsologtostderr -v=1: (17.38217s)
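The assertion at addons_test.go:364 failed only because stderr was non-empty: minikube's `ip` command logged a warning that Docker CLI context "default" could not be resolved, because its metadata file was missing. The Docker CLI stores context metadata under `~/.docker/contexts/meta/<sha256(context name)>/meta.json`, which is exactly the hashed path visible in the warning. An illustrative POSIX sketch of that lookup (an assumption for demonstration; the CI host is Windows, where the base path is `C:\Users\<user>\.docker` instead):

```shell
#!/bin/sh
# Recompute the metadata path the Docker CLI derives for a context name.
# The directory name is the SHA-256 hex digest of the context name.
ctx="default"
hash=$(printf '%s' "$ctx" | sha256sum | cut -d' ' -f1)
meta="$HOME/.docker/contexts/meta/$hash/meta.json"

# Prints the same hash that appears in the warning's path above.
echo "$hash"

# The warning fires when this file does not exist.
[ -f "$meta" ] || echo "context \"$ctx\" metadata missing at $meta"
```

Running this yields `37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f`, matching the path in the logged warning, which confirms the stale/absent context metadata (not the registry addon itself) as the cause of the stderr mismatch.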
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-363100 -n addons-363100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-363100 -n addons-363100: (13.4721045s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 logs -n 25: (9.8220028s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-552800 | minikube1\jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | -p download-only-552800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| delete  | -p download-only-552800                                                                     | download-only-552800 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| start   | -o=json --download-only                                                                     | download-only-847500 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT |                     |
	|         | -p download-only-847500                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| delete  | -p download-only-847500                                                                     | download-only-847500 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| delete  | -p download-only-552800                                                                     | download-only-552800 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| delete  | -p download-only-847500                                                                     | download-only-847500 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-061100 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT |                     |
	|         | binary-mirror-061100                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:60375                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-061100                                                                     | binary-mirror-061100 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| addons  | enable dashboard -p                                                                         | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT |                     |
	|         | addons-363100                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT |                     |
	|         | addons-363100                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-363100 --wait=true                                                                | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:28 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-363100 addons                                                                        | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:28 PDT | 20 May 24 03:28 PDT |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-363100 ssh cat                                                                       | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:28 PDT | 20 May 24 03:28 PDT |
	|         | /opt/local-path-provisioner/pvc-94e3d7fc-7011-49fb-8aa0-2b4343d236b6_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-363100 ip                                                                            | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:28 PDT | 20 May 24 03:28 PDT |
	| addons  | addons-363100 addons disable                                                                | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:28 PDT | 20 May 24 03:28 PDT |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-363100 addons disable                                                                | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:28 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-363100 addons disable                                                                | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:28 PDT | 20 May 24 03:28 PDT |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-363100 addons                                                                        | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:29 PDT |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-363100        | minikube1\jenkins | v1.33.1 | 20 May 24 03:29 PDT |                     |
	|         | addons-363100                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:21:31
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:21:31.839985   12008 out.go:291] Setting OutFile to fd 748 ...
	I0520 03:21:31.840980   12008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:31.840980   12008 out.go:304] Setting ErrFile to fd 716...
	I0520 03:21:31.840980   12008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:31.872803   12008 out.go:298] Setting JSON to false
	I0520 03:21:31.877308   12008 start.go:129] hostinfo: {"hostname":"minikube1","uptime":488,"bootTime":1716200003,"procs":209,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:21:31.877308   12008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:21:31.881895   12008 out.go:177] * [addons-363100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:21:31.885958   12008 notify.go:220] Checking for updates...
	I0520 03:21:31.888523   12008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:21:31.891086   12008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:21:31.894095   12008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:21:31.897117   12008 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:21:31.899091   12008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:21:31.902086   12008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:21:37.801742   12008 out.go:177] * Using the hyperv driver based on user configuration
	I0520 03:21:37.807323   12008 start.go:297] selected driver: hyperv
	I0520 03:21:37.807388   12008 start.go:901] validating driver "hyperv" against <nil>
	I0520 03:21:37.807388   12008 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:21:37.858362   12008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:21:37.859626   12008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:21:37.859626   12008 cni.go:84] Creating CNI manager for ""
	I0520 03:21:37.859626   12008 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:21:37.859626   12008 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:21:37.859626   12008 start.go:340] cluster config:
	{Name:addons-363100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-363100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:21:37.860291   12008 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:21:37.864884   12008 out.go:177] * Starting "addons-363100" primary control-plane node in "addons-363100" cluster
	I0520 03:21:37.867078   12008 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:21:37.867678   12008 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 03:21:37.867678   12008 cache.go:56] Caching tarball of preloaded images
	I0520 03:21:37.867678   12008 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 03:21:37.868197   12008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:21:37.868447   12008 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\config.json ...
	I0520 03:21:37.869042   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\config.json: {Name:mk115ee63ac7442ec3a647596664b3103b93f008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:21:37.870220   12008 start.go:360] acquireMachinesLock for addons-363100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:21:37.870220   12008 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-363100"
	I0520 03:21:37.870852   12008 start.go:93] Provisioning new machine with config: &{Name:addons-363100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:addons-363100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:21:37.870905   12008 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 03:21:37.874329   12008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 03:21:37.874426   12008 start.go:159] libmachine.API.Create for "addons-363100" (driver="hyperv")
	I0520 03:21:37.874426   12008 client.go:168] LocalClient.Create starting
	I0520 03:21:37.875004   12008 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 03:21:38.031338   12008 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 03:21:38.159052   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 03:21:40.385592   12008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 03:21:40.386760   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:40.386833   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 03:21:42.183993   12008 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 03:21:42.183993   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:42.185166   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 03:21:43.705600   12008 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 03:21:43.706141   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:43.706372   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 03:21:47.482663   12008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 03:21:47.482714   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:47.484754   12008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 03:21:47.948794   12008 main.go:141] libmachine: Creating SSH key...
	I0520 03:21:48.115598   12008 main.go:141] libmachine: Creating VM...
	I0520 03:21:48.115598   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 03:21:51.003982   12008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 03:21:51.003982   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:51.003982   12008 main.go:141] libmachine: Using switch "Default Switch"
	I0520 03:21:51.003982   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 03:21:52.776728   12008 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 03:21:52.776728   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:52.776728   12008 main.go:141] libmachine: Creating VHD
	I0520 03:21:52.777498   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 03:21:56.634386   12008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 23B36C1D-334D-4B19-9186-A950224A70A1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 03:21:56.635239   12008 main.go:141] libmachine: [stderr =====>] : 
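The `New-VHD` output above shows `FileSize` (10486272) exceeding `Size` (10485760) by exactly 512 bytes: a fixed-format VHD is the raw disk image followed by a 512-byte footer, so the file on the host is one footer larger than the capacity exposed to the guest. A quick check of that arithmetic, with the two values copied from the log:

```shell
# Values copied from the Hyper-V New-VHD output in the log above.
file_size=10486272   # FileSize: bytes the .vhd occupies on the host
disk_size=10485760   # Size: capacity exposed to the guest (10 MiB)

# Fixed-format VHDs append a 512-byte footer after the raw image,
# so the difference should be exactly one footer.
footer=$((file_size - disk_size))
echo "footer bytes: $footer"    # → footer bytes: 512
```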
	I0520 03:21:56.635342   12008 main.go:141] libmachine: Writing magic tar header
	I0520 03:21:56.635614   12008 main.go:141] libmachine: Writing SSH key tar header
	I0520 03:21:56.644383   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 03:21:59.837764   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:21:59.838257   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:21:59.838329   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\disk.vhd' -SizeBytes 20000MB
	I0520 03:22:02.474132   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:02.474183   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:02.474183   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-363100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0520 03:22:06.250917   12008 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-363100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 03:22:06.251161   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:06.251161   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-363100 -DynamicMemoryEnabled $false
	I0520 03:22:08.551782   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:08.551782   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:08.552394   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-363100 -Count 2
	I0520 03:22:10.777025   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:10.777025   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:10.777025   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-363100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\boot2docker.iso'
	I0520 03:22:13.419068   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:13.419068   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:13.419068   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-363100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\disk.vhd'
	I0520 03:22:16.152702   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:16.152702   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:16.153027   12008 main.go:141] libmachine: Starting VM...
	I0520 03:22:16.153082   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-363100
	I0520 03:22:19.468251   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:19.469020   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:19.469020   12008 main.go:141] libmachine: Waiting for host to start...
	I0520 03:22:19.469020   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:21.930080   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:21.930080   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:21.931139   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:22:24.600641   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:24.601177   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:25.607963   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:27.977870   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:27.977870   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:27.977870   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:22:30.683686   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:30.683759   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:31.688711   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:34.033304   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:34.033729   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:34.033789   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:22:36.749642   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:36.749854   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:37.750963   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:40.044112   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:40.045073   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:40.045073   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:22:42.631964   12008 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:22:42.631964   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:43.640819   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:45.944590   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:45.944590   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:45.945693   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:22:48.550487   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:22:48.550487   12008 main.go:141] libmachine: [stderr =====>] : 
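The alternating `Get-VM ... .state` / `ipaddresses[0]` calls above are a poll-until-ready loop: libmachine re-queries the first network adapter once per second until DHCP has handed the guest an address (here at 03:22:48, roughly 30s after `Start-VM`). The same pattern as a minimal shell sketch; `get_ip` is a hypothetical stand-in for the PowerShell `ipaddresses[0]` query, rigged to simulate DHCP lag:

```shell
# Hypothetical stand-in for:
#   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
# Prints nothing for the first three polls, simulating a slow DHCP lease.
get_ip() {
  if [ "$1" -ge 3 ]; then echo "172.25.240.77"; fi
  return 0
}

# Poll until get_ip returns a non-empty address, with a bounded retry count.
wait_for_ip() {
  i=0
  while [ "$i" -lt 10 ]; do
    ip=$(get_ip "$i")
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    i=$((i + 1))   # the real loop also sleeps ~1s between polls
  done
  return 1         # timed out without an address
}

wait_for_ip    # → 172.25.240.77
```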
	I0520 03:22:48.550487   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:50.745188   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:50.745188   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:50.745188   12008 machine.go:94] provisionDockerMachine start ...
	I0520 03:22:50.745188   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:52.924802   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:52.925389   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:52.925389   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:22:55.528001   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:22:55.528045   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:55.533191   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:22:55.544865   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:22:55.544865   12008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:22:55.684206   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 03:22:55.684206   12008 buildroot.go:166] provisioning hostname "addons-363100"
	I0520 03:22:55.684206   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:22:57.844923   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:22:57.844981   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:22:57.844981   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:00.501929   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:00.502936   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:00.508980   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:00.509768   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:00.509768   12008 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-363100 && echo "addons-363100" | sudo tee /etc/hostname
	I0520 03:23:00.663228   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-363100
	
	I0520 03:23:00.663228   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:02.858910   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:02.859468   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:02.859545   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:05.435779   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:05.435779   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:05.444375   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:05.444375   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:05.444375   12008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-363100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-363100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-363100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:23:05.591263   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
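The heredoc above is the `/etc/hosts` reconciliation minikube runs over SSH: if the new hostname is absent, it either rewrites an existing `127.0.1.1` line or appends one. The same logic run against a throwaway copy of the file (GNU `grep`/`sed` assumed; the real script runs under `sudo` against `/etc/hosts`):

```shell
# Close paraphrase of the /etc/hosts edit from the provisioning step,
# applied to a temp file instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=addons-363100

if ! grep -q "\s$name" "$hosts"; then            # hostname not present yet
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then    # reuse the 127.0.1.1 line...
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
  else                                           # ...or append a fresh one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

After the run, the second line reads `127.0.1.1 addons-363100`; a second invocation is a no-op, which is why the command is safe to re-run on every provision.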
	I0520 03:23:05.591263   12008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 03:23:05.591263   12008 buildroot.go:174] setting up certificates
	I0520 03:23:05.591263   12008 provision.go:84] configureAuth start
	I0520 03:23:05.591263   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:07.757761   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:07.758645   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:07.758645   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:10.368855   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:10.368855   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:10.368855   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:12.516260   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:12.516260   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:12.516543   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:15.159070   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:15.159070   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:15.159366   12008 provision.go:143] copyHostCerts
	I0520 03:23:15.159835   12008 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 03:23:15.161224   12008 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 03:23:15.162420   12008 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 03:23:15.163249   12008 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-363100 san=[127.0.0.1 172.25.240.77 addons-363100 localhost minikube]
	I0520 03:23:15.280190   12008 provision.go:177] copyRemoteCerts
	I0520 03:23:15.292205   12008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:23:15.292205   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:17.476873   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:17.476873   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:17.476873   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:20.018683   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:20.019500   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:20.019718   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:23:20.119763   12008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8275559s)
	I0520 03:23:20.120481   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:23:20.166724   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 03:23:20.209018   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 03:23:20.256533   12008 provision.go:87] duration metric: took 14.6652621s to configureAuth
	I0520 03:23:20.256533   12008 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:23:20.257437   12008 config.go:182] Loaded profile config "addons-363100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:23:20.257437   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:22.462504   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:22.462504   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:22.463085   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:25.072358   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:25.072928   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:25.078205   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:25.078729   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:25.078729   12008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:23:25.209598   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:23:25.209598   12008 buildroot.go:70] root file system type: tmpfs
	I0520 03:23:25.209598   12008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:23:25.209598   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:27.378264   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:27.378864   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:27.379104   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:29.934433   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:29.934541   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:29.940675   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:29.940675   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:29.941201   12008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:23:30.110245   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:23:30.110245   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:32.322108   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:32.322162   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:32.322162   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:34.905505   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:34.905562   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:34.911668   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:34.911668   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:34.911668   12008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:23:37.081423   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 03:23:37.081423   12008 machine.go:97] duration metric: took 46.336213s to provisionDockerMachine
	I0520 03:23:37.081423   12008 client.go:171] duration metric: took 1m59.2069381s to LocalClient.Create
	I0520 03:23:37.081423   12008 start.go:167] duration metric: took 1m59.2069381s to libmachine.API.Create "addons-363100"
	I0520 03:23:37.081423   12008 start.go:293] postStartSetup for "addons-363100" (driver="hyperv")
	I0520 03:23:37.081423   12008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:23:37.095918   12008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:23:37.096114   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:39.241494   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:39.242154   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:39.242211   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:41.817652   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:41.817885   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:41.817936   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:23:41.934616   12008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8386132s)
	I0520 03:23:41.947693   12008 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:23:41.956063   12008 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 03:23:41.956063   12008 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 03:23:41.956688   12008 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 03:23:41.956688   12008 start.go:296] duration metric: took 4.8752638s for postStartSetup
	I0520 03:23:41.959064   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:44.176249   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:44.176249   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:44.176249   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:46.806127   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:46.806127   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:46.806564   12008 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\config.json ...
	I0520 03:23:46.809226   12008 start.go:128] duration metric: took 2m8.9382606s to createHost
	I0520 03:23:46.809827   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:49.112794   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:49.113571   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:49.113571   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:51.802464   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:51.802464   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:51.808530   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:51.809272   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:51.809335   12008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 03:23:51.946560   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716200631.946216970
	
	I0520 03:23:51.946560   12008 fix.go:216] guest clock: 1716200631.946216970
	I0520 03:23:51.946560   12008 fix.go:229] Guest: 2024-05-20 03:23:51.94621697 -0700 PDT Remote: 2024-05-20 03:23:46.809226 -0700 PDT m=+135.062357101 (delta=5.13699097s)
	I0520 03:23:51.946560   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:54.165884   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:54.165884   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:54.165884   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:23:56.813806   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:23:56.813806   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:56.819786   12008 main.go:141] libmachine: Using SSH client type: native
	I0520 03:23:56.820636   12008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.77 22 <nil> <nil>}
	I0520 03:23:56.820636   12008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716200631
	I0520 03:23:56.966896   12008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 10:23:51 UTC 2024
	
	I0520 03:23:56.966896   12008 fix.go:236] clock set: Mon May 20 10:23:51 UTC 2024
	 (err=<nil>)
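The clock-sync exchange above can be hard to read through the Go format-verb noise: the garbled `date +%!s(MISSING).%!N(MISSING)` line is fmt artifacting for the command `date +%s.%N`, whose output minikube compares against the host time before issuing `sudo date -s @<epoch>`. A minimal sketch of that sequence, with the epoch value taken from the log (no real SSH or clock change here):

```shell
# Sketch of the guest-clock fix above (values from the log; the
# "date +%!s(MISSING).%!N(MISSING)" line is Go fmt noise for: date +%s.%N).
guest_epoch=$(date +%s.%N)          # what minikube reads over SSH
host_epoch=1716200631               # host-side epoch seconds from the log
# when the delta is large enough, minikube runs the following over SSH:
echo "sudo date -s @${host_epoch}"
```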
	I0520 03:23:56.966896   12008 start.go:83] releasing machines lock for "addons-363100", held for 2m19.0966125s
	I0520 03:23:56.966896   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:23:59.582286   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:23:59.582286   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:23:59.582286   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:24:02.454743   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:24:02.455137   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:24:02.459641   12008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:24:02.459871   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:02.469767   12008 ssh_runner.go:195] Run: cat /version.json
	I0520 03:24:02.470762   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:04.733757   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:24:04.733965   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:24:04.733757   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:24:04.733965   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:24:04.733965   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:24:04.733965   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:24:07.445716   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:24:07.445716   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:24:07.445716   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:24:07.465143   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:24:07.465649   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:24:07.465858   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:24:07.665772   12008 ssh_runner.go:235] Completed: cat /version.json: (5.1950071s)
	I0520 03:24:07.665772   12008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2060521s)
	W0520 03:24:07.666138   12008 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 03:24:07.680124   12008 ssh_runner.go:195] Run: systemctl --version
	I0520 03:24:07.701892   12008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 03:24:07.712577   12008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:24:07.725818   12008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 03:24:07.758154   12008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 03:24:07.758154   12008 start.go:494] detecting cgroup driver to use...
	I0520 03:24:07.758154   12008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:24:07.805923   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 03:24:07.838673   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:24:07.860842   12008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:24:07.873528   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:24:07.907900   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:24:07.937686   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:24:07.969072   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:24:08.001645   12008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:24:08.035182   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:24:08.067828   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:24:08.102808   12008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 03:24:08.135743   12008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:24:08.166020   12008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:24:08.198043   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:08.392784   12008 ssh_runner.go:195] Run: sudo systemctl restart containerd
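The containerd reconfiguration above is a series of in-place `sed` edits to `/etc/containerd/config.toml` followed by a daemon restart. A sketch of the central edit (switching `SystemdCgroup` off for the cgroupfs driver), applied to a scratch copy rather than the real file:

```shell
# Sketch: the SystemdCgroup edit from the log, run against a scratch copy
# of containerd's config.toml instead of /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# same sed expression as the log line, minus sudo and the real path:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```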
	I0520 03:24:08.424563   12008 start.go:494] detecting cgroup driver to use...
	I0520 03:24:08.440572   12008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:24:08.476223   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:24:08.513480   12008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:24:08.571764   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:24:08.606061   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:24:08.643994   12008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 03:24:08.712490   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:24:08.738710   12008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
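The two-line `printf ... | sudo tee` command above (the embedded newline is why it spans two log lines) just writes a one-key crictl config pointing at the cri-dockerd socket. The same pipeline against a temp file:

```shell
# Sketch: the printf | sudo tee pipeline above, writing to a temp file
# rather than /etc/crictl.yaml.
out=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
' | tee "$out" >/dev/null
cat "$out"
```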
	I0520 03:24:08.787413   12008 ssh_runner.go:195] Run: which cri-dockerd
	I0520 03:24:08.806318   12008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:24:08.822597   12008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:24:08.866909   12008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:24:09.065580   12008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:24:09.271082   12008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:24:09.271656   12008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:24:09.321547   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:09.515089   12008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:24:12.030955   12008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5158647s)
	I0520 03:24:12.043743   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:24:12.080844   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:24:12.118840   12008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:24:12.323456   12008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:24:12.516816   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:12.707389   12008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:24:12.753390   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:24:12.792555   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:12.987154   12008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:24:13.102291   12008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:24:13.115833   12008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:24:13.124420   12008 start.go:562] Will wait 60s for crictl version
	I0520 03:24:13.138470   12008 ssh_runner.go:195] Run: which crictl
	I0520 03:24:13.157041   12008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:24:13.218691   12008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 03:24:13.228941   12008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:24:13.274112   12008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:24:13.318206   12008 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 03:24:13.318405   12008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 03:24:13.322462   12008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 03:24:13.322462   12008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 03:24:13.322462   12008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 03:24:13.322462   12008 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 03:24:13.325421   12008 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 03:24:13.325421   12008 ip.go:210] interface addr: 172.25.240.1/20
	I0520 03:24:13.339007   12008 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 03:24:13.345375   12008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
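The `grep -v ... ; echo ... > /tmp/h.$$` command above is minikube's idempotent way of refreshing a `/etc/hosts` entry: strip any existing `host.minikube.internal` line, append the current one, then copy the result back. The same trick against a temp file (the stale `10.0.0.9` entry is a hypothetical stand-in; the new IP is from the log):

```shell
# Sketch of the idempotent hosts-entry refresh above, run against a temp
# file instead of /etc/hosts. Requires bash for the $'\t' tab literal,
# as in the logged /bin/bash -c command.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"  # 10.0.0.9 = hypothetical stale entry
tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
cat "$hosts"
```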
	I0520 03:24:13.374297   12008 kubeadm.go:877] updating cluster {Name:addons-363100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-363100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.240.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 03:24:13.374597   12008 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:24:13.384779   12008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:24:13.406185   12008 docker.go:685] Got preloaded images: 
	I0520 03:24:13.406185   12008 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 03:24:13.419733   12008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:24:13.453826   12008 ssh_runner.go:195] Run: which lz4
	I0520 03:24:13.482414   12008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 03:24:13.489520   12008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 03:24:13.489621   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 03:24:15.201987   12008 docker.go:649] duration metric: took 1.7333515s to copy over tarball
	I0520 03:24:15.216666   12008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 03:24:21.121949   12008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.9052158s)
	I0520 03:24:21.122006   12008 ssh_runner.go:146] rm: /preloaded.tar.lz4
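The preload flow above is: stat for an existing `/preloaded.tar.lz4`, scp the cached tarball over when missing, extract it into `/var`, then delete it. A sketch with plain `tar` and temp dirs (the real run adds `--xattrs --xattrs-include security.capability -I lz4` and extracts into `/var`):

```shell
# Sketch of the preload tarball flow above, using temp dirs and an
# uncompressed tar as a stand-in for the scp'd .tar.lz4.
work=$(mktemp -d)
mkdir "$work/var"
echo layer > "$work/file"
tar -C "$work" -cf "$work/preloaded.tar" file   # stands in for the copied tarball
tar -C "$work/var" -xf "$work/preloaded.tar"    # the "tar ... -C /var -xf" step
rm "$work/preloaded.tar"                        # the "rm: /preloaded.tar.lz4" step
```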
	I0520 03:24:21.191524   12008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:24:21.211401   12008 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 03:24:21.266600   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:21.491673   12008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:24:27.016102   12008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.5244253s)
	I0520 03:24:27.030293   12008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:24:27.059576   12008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:24:27.059720   12008 cache_images.go:84] Images are preloaded, skipping loading
	I0520 03:24:27.059720   12008 kubeadm.go:928] updating node { 172.25.240.77 8443 v1.30.1 docker true true} ...
	I0520 03:24:27.059974   12008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-363100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.240.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-363100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 03:24:27.069815   12008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 03:24:27.105253   12008 cni.go:84] Creating CNI manager for ""
	I0520 03:24:27.105833   12008 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:24:27.105874   12008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 03:24:27.105874   12008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.240.77 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-363100 NodeName:addons-363100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.240.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.240.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 03:24:27.106182   12008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.240.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-363100"
	  kubeletExtraArgs:
	    node-ip: 172.25.240.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.240.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 03:24:27.120176   12008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 03:24:27.136823   12008 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 03:24:27.150590   12008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 03:24:27.167986   12008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 03:24:27.197075   12008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 03:24:27.226307   12008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0520 03:24:27.272991   12008 ssh_runner.go:195] Run: grep 172.25.240.77	control-plane.minikube.internal$ /etc/hosts
	I0520 03:24:27.279783   12008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.240.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 03:24:27.320767   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:27.512450   12008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:24:27.543006   12008 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100 for IP: 172.25.240.77
	I0520 03:24:27.543006   12008 certs.go:194] generating shared ca certs ...
	I0520 03:24:27.543006   12008 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:27.543542   12008 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 03:24:27.656524   12008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0520 03:24:27.656524   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:27.658489   12008 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0520 03:24:27.658489   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:27.659525   12008 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 03:24:27.818244   12008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0520 03:24:27.818244   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:27.820315   12008 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0520 03:24:27.820315   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:27.821292   12008 certs.go:256] generating profile certs ...
	I0520 03:24:27.822296   12008 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.key
	I0520 03:24:27.822296   12008 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt with IP's: []
	I0520 03:24:28.198944   12008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt ...
	I0520 03:24:28.198944   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: {Name:mk6a41173afd6fb35936df47ec2acb22b1436df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:28.200010   12008 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.key ...
	I0520 03:24:28.200010   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.key: {Name:mke91761993e966edd801d7954c4654f9314db7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:28.201095   12008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.key.fbd30952
	I0520 03:24:28.202144   12008 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.crt.fbd30952 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.240.77]
	I0520 03:24:28.399012   12008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.crt.fbd30952 ...
	I0520 03:24:28.399012   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.crt.fbd30952: {Name:mk72469392e992deaf55716fdabfe89809da8e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:28.399889   12008 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.key.fbd30952 ...
	I0520 03:24:28.399889   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.key.fbd30952: {Name:mkcfa9f9fdc5f2f203f2655bc5f92922e367add4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:28.401489   12008 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.crt.fbd30952 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.crt
	I0520 03:24:28.412636   12008 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.key.fbd30952 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.key
	I0520 03:24:28.413621   12008 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.key
	I0520 03:24:28.414319   12008 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.crt with IP's: []
	I0520 03:24:28.617456   12008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.crt ...
	I0520 03:24:28.618416   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.crt: {Name:mkab750fdab0baa7c7076818e1b5821e54cc430d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:28.619718   12008 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.key ...
	I0520 03:24:28.619718   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.key: {Name:mkc337859dc0228b459c8b97cd59b9008592c05d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:28.630729   12008 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 03:24:28.639346   12008 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 03:24:28.645328   12008 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 03:24:28.652064   12008 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 03:24:28.660460   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 03:24:28.709932   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 03:24:28.754906   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 03:24:28.800392   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 03:24:28.845766   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 03:24:28.886806   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 03:24:28.933854   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 03:24:28.976325   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 03:24:29.022123   12008 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 03:24:29.081731   12008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 03:24:29.137809   12008 ssh_runner.go:195] Run: openssl version
	I0520 03:24:29.160945   12008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 03:24:29.195496   12008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:24:29.202743   12008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:24:29.216824   12008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:24:29.246697   12008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 03:24:29.277055   12008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 03:24:29.283478   12008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 03:24:29.283478   12008 kubeadm.go:391] StartCluster: {Name:addons-363100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:addons-363100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.240.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:24:29.293659   12008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:24:29.329609   12008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 03:24:29.355304   12008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:24:29.391517   12008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:24:29.409895   12008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:24:29.409895   12008 kubeadm.go:156] found existing configuration files:
	
	I0520 03:24:29.423968   12008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 03:24:29.440004   12008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:24:29.452770   12008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:24:29.482314   12008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 03:24:29.499981   12008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:24:29.514424   12008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:24:29.547648   12008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 03:24:29.565652   12008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:24:29.580428   12008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:24:29.609430   12008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 03:24:29.628263   12008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:24:29.640663   12008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 03:24:29.658255   12008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 03:24:29.906922   12008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 03:24:42.691207   12008 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 03:24:42.691372   12008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 03:24:42.691635   12008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 03:24:42.691907   12008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 03:24:42.692143   12008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 03:24:42.692341   12008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 03:24:42.698832   12008 out.go:204]   - Generating certificates and keys ...
	I0520 03:24:42.699169   12008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 03:24:42.699268   12008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 03:24:42.699462   12008 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 03:24:42.699662   12008 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 03:24:42.699801   12008 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 03:24:42.699850   12008 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 03:24:42.699850   12008 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 03:24:42.699850   12008 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-363100 localhost] and IPs [172.25.240.77 127.0.0.1 ::1]
	I0520 03:24:42.700393   12008 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 03:24:42.700734   12008 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-363100 localhost] and IPs [172.25.240.77 127.0.0.1 ::1]
	I0520 03:24:42.700734   12008 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 03:24:42.700734   12008 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 03:24:42.700734   12008 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 03:24:42.701279   12008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 03:24:42.701524   12008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 03:24:42.701524   12008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 03:24:42.701524   12008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 03:24:42.701524   12008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 03:24:42.702157   12008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 03:24:42.702408   12008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 03:24:42.702544   12008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 03:24:42.705033   12008 out.go:204]   - Booting up control plane ...
	I0520 03:24:42.705862   12008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 03:24:42.705922   12008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 03:24:42.705922   12008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 03:24:42.705922   12008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 03:24:42.706581   12008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 03:24:42.706700   12008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 03:24:42.706900   12008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 03:24:42.706900   12008 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 03:24:42.707394   12008 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001077306s
	I0520 03:24:42.707572   12008 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 03:24:42.707724   12008 kubeadm.go:309] [api-check] The API server is healthy after 7.002948727s
	I0520 03:24:42.707891   12008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 03:24:42.708162   12008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 03:24:42.708310   12008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 03:24:42.708497   12008 kubeadm.go:309] [mark-control-plane] Marking the node addons-363100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 03:24:42.708497   12008 kubeadm.go:309] [bootstrap-token] Using token: len5zc.qhmxvel8tyzi09us
	I0520 03:24:42.710814   12008 out.go:204]   - Configuring RBAC rules ...
	I0520 03:24:42.711819   12008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 03:24:42.711819   12008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 03:24:42.711819   12008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 03:24:42.712926   12008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 03:24:42.713186   12008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 03:24:42.713576   12008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 03:24:42.713630   12008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 03:24:42.713630   12008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 03:24:42.714014   12008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 03:24:42.714092   12008 kubeadm.go:309] 
	I0520 03:24:42.714142   12008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 03:24:42.714142   12008 kubeadm.go:309] 
	I0520 03:24:42.714552   12008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 03:24:42.714552   12008 kubeadm.go:309] 
	I0520 03:24:42.714552   12008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 03:24:42.714997   12008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 03:24:42.714997   12008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 03:24:42.714997   12008 kubeadm.go:309] 
	I0520 03:24:42.715329   12008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 03:24:42.715391   12008 kubeadm.go:309] 
	I0520 03:24:42.715391   12008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 03:24:42.715391   12008 kubeadm.go:309] 
	I0520 03:24:42.715622   12008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 03:24:42.715848   12008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 03:24:42.716051   12008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 03:24:42.716051   12008 kubeadm.go:309] 
	I0520 03:24:42.716275   12008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 03:24:42.716489   12008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 03:24:42.716489   12008 kubeadm.go:309] 
	I0520 03:24:42.716696   12008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token len5zc.qhmxvel8tyzi09us \
	I0520 03:24:42.716949   12008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 03:24:42.716949   12008 kubeadm.go:309] 	--control-plane 
	I0520 03:24:42.716949   12008 kubeadm.go:309] 
	I0520 03:24:42.717863   12008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 03:24:42.717863   12008 kubeadm.go:309] 
	I0520 03:24:42.718127   12008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token len5zc.qhmxvel8tyzi09us \
	I0520 03:24:42.718557   12008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 03:24:42.718608   12008 cni.go:84] Creating CNI manager for ""
	I0520 03:24:42.718672   12008 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:24:42.722003   12008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 03:24:42.737523   12008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 03:24:42.758352   12008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 03:24:42.799137   12008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 03:24:42.814866   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:42.814866   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-363100 minikube.k8s.io/updated_at=2024_05_20T03_24_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=addons-363100 minikube.k8s.io/primary=true
	I0520 03:24:42.837013   12008 ops.go:34] apiserver oom_adj: -16
	I0520 03:24:43.003268   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:43.512378   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:44.003161   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:44.513056   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:45.003383   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:45.513850   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:46.002643   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:46.511727   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:47.000097   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:47.510126   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:48.002493   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:48.511922   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:49.006977   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:49.514477   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:50.010415   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:50.502934   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:51.014158   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:51.505149   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:52.014868   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:52.511098   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:53.008264   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:53.513723   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:54.007918   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:54.502575   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:55.007462   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:55.511144   12008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:24:55.656441   12008 kubeadm.go:1107] duration metric: took 12.8571386s to wait for elevateKubeSystemPrivileges
	W0520 03:24:55.656633   12008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 03:24:55.656633   12008 kubeadm.go:393] duration metric: took 26.3731392s to StartCluster
	I0520 03:24:55.656713   12008 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:55.656917   12008 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:24:55.657713   12008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:24:55.660772   12008 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.240.77 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:24:55.664258   12008 out.go:177] * Verifying Kubernetes components...
	I0520 03:24:55.660890   12008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 03:24:55.660772   12008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 03:24:55.666901   12008 addons.go:69] Setting cloud-spanner=true in profile "addons-363100"
	I0520 03:24:55.666991   12008 addons.go:69] Setting yakd=true in profile "addons-363100"
	I0520 03:24:55.667109   12008 addons.go:69] Setting ingress=true in profile "addons-363100"
	I0520 03:24:55.667109   12008 addons.go:234] Setting addon cloud-spanner=true in "addons-363100"
	I0520 03:24:55.667186   12008 addons.go:234] Setting addon yakd=true in "addons-363100"
	I0520 03:24:55.667186   12008 addons.go:69] Setting helm-tiller=true in profile "addons-363100"
	I0520 03:24:55.667248   12008 addons.go:234] Setting addon helm-tiller=true in "addons-363100"
	I0520 03:24:55.667248   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667248   12008 addons.go:69] Setting storage-provisioner=true in profile "addons-363100"
	I0520 03:24:55.667479   12008 addons.go:234] Setting addon storage-provisioner=true in "addons-363100"
	I0520 03:24:55.667479   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667573   12008 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-363100"
	I0520 03:24:55.667573   12008 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-363100"
	I0520 03:24:55.667573   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667777   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667777   12008 addons.go:69] Setting inspektor-gadget=true in profile "addons-363100"
	I0520 03:24:55.667777   12008 addons.go:234] Setting addon inspektor-gadget=true in "addons-363100"
	I0520 03:24:55.667777   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667777   12008 addons.go:69] Setting ingress-dns=true in profile "addons-363100"
	I0520 03:24:55.667777   12008 addons.go:234] Setting addon ingress-dns=true in "addons-363100"
	I0520 03:24:55.668243   12008 addons.go:69] Setting volumesnapshots=true in profile "addons-363100"
	I0520 03:24:55.668243   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.668243   12008 addons.go:234] Setting addon volumesnapshots=true in "addons-363100"
	I0520 03:24:55.668243   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.668243   12008 addons.go:69] Setting registry=true in profile "addons-363100"
	I0520 03:24:55.668243   12008 addons.go:234] Setting addon registry=true in "addons-363100"
	I0520 03:24:55.668243   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.668243   12008 addons.go:69] Setting metrics-server=true in profile "addons-363100"
	I0520 03:24:55.668243   12008 addons.go:234] Setting addon metrics-server=true in "addons-363100"
	I0520 03:24:55.669243   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.669243   12008 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-363100"
	I0520 03:24:55.669243   12008 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-363100"
	I0520 03:24:55.667186   12008 addons.go:234] Setting addon ingress=true in "addons-363100"
	I0520 03:24:55.669243   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667186   12008 addons.go:69] Setting gcp-auth=true in profile "addons-363100"
	I0520 03:24:55.669243   12008 mustload.go:65] Loading cluster: addons-363100
	I0520 03:24:55.667186   12008 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-363100"
	I0520 03:24:55.667248   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.670247   12008 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-363100"
	I0520 03:24:55.670247   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:24:55.667186   12008 addons.go:69] Setting default-storageclass=true in profile "addons-363100"
	I0520 03:24:55.672348   12008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-363100"
	I0520 03:24:55.673257   12008 config.go:182] Loaded profile config "addons-363100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:55.673257   12008 config.go:182] Loaded profile config "addons-363100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:55.680244   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.681242   12008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:24:55.681242   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.682250   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.684247   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.684247   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.684247   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.685250   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.685250   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.685250   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.685250   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.685250   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.687247   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.687247   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.687247   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:55.688244   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:24:56.999274   12008 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.3329147s)
	I0520 03:24:57.035683   12008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 03:24:56.999274   12008 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.318031s)
	I0520 03:24:57.056221   12008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:24:59.538611   12008 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.5024764s)
	I0520 03:24:59.538611   12008 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
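The "host record injected" line above is the result of the long sed pipeline at ssh_runner.go:195: it reads the coredns ConfigMap, inserts a `hosts {}` block before the `forward` plugin and a `log` directive before `errors`, then replaces the ConfigMap. A self-contained sketch of that same edit, assuming GNU sed (the one-line `i \` form with embedded `\n` is a GNU extension) and using a hand-written stand-in Corefile rather than the one from this cluster:

```shell
# Illustrative only: re-run the sed edit from the logged command against a
# sample Corefile. 172.25.240.1 is the host gateway IP taken from the log;
# the Corefile body below is an illustrative stand-in, not cluster state.
corefile='.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}'
# Two -e expressions, mirroring the logged pipeline: insert a hosts{} block
# before the forward plugin, and "log" before "errors" (GNU sed syntax).
patched=$(printf '%s\n' "$corefile" | sed \
  -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' \
  -e '/^        errors *$/i \        log')
printf '%s\n' "$patched"
```

With this in place, pods resolving `host.minikube.internal` get the host's gateway address directly from CoreDNS instead of falling through to the upstream resolver.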
	I0520 03:24:59.544622   12008 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.4884002s)
	I0520 03:24:59.546623   12008 node_ready.go:35] waiting up to 6m0s for node "addons-363100" to be "Ready" ...
	I0520 03:24:59.759621   12008 node_ready.go:49] node "addons-363100" has status "Ready":"True"
	I0520 03:24:59.759621   12008 node_ready.go:38] duration metric: took 212.9986ms for node "addons-363100" to be "Ready" ...
	I0520 03:24:59.759621   12008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 03:24:59.954686   12008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace to be "Ready" ...
	W0520 03:25:00.344702   12008 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-363100" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0520 03:25:00.344702   12008 start.go:159] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
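The "Operation cannot be fulfilled ... the object has been modified" error above is the apiserver's optimistic-concurrency check: the rescale was submitted with a stale `resourceVersion`, so the write was rejected (minikube classifies this particular failure as non-retryable and moves on). The standard client-side remedy is to re-read the latest object and retry the change. A toy model of that retry-on-conflict loop, with entirely hypothetical names and no real cluster involved:

```shell
# Toy simulation of retry-on-conflict. server_version stands in for the
# apiserver's current resourceVersion; cached_version is our stale copy,
# as in the failed coredns rescale above. No kubectl calls are made.
server_version=7
cached_version=5
attempts=0
scale_deployment() {
  attempts=$((attempts + 1))
  if [ "$cached_version" -ne "$server_version" ]; then
    # Conflict: refresh our copy of the object, the way a client-go
    # RetryOnConflict loop re-GETs before re-applying the mutation.
    cached_version=$server_version
    return 1
  fi
  return 0
}
until scale_deployment; do :; done
echo "scaled on attempt $attempts"
```

The first attempt fails the version check and refreshes the cached copy; the second succeeds, which is the typical shape of these conflicts when only one other writer touched the object.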
	I0520 03:25:02.017301   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:02.201297   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.201297   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.210294   12008 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 03:25:02.206293   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.217412   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.232410   12008 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 03:25:02.238791   12008 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 03:25:02.242413   12008 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 03:25:02.245419   12008 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 03:25:02.243420   12008 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 03:25:02.248409   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 03:25:02.248409   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.249483   12008 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 03:25:02.249483   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 03:25:02.249483   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.544155   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.544155   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.545150   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:25:02.554270   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.554270   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.561274   12008 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 03:25:02.571273   12008 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 03:25:02.571273   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 03:25:02.572271   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.585775   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.586275   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.590123   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 03:25:02.593068   12008 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 03:25:02.593068   12008 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 03:25:02.593068   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.611258   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.611258   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.614093   12008 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-363100"
	I0520 03:25:02.614724   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:25:02.616111   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.645992   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.645992   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.656089   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 03:25:02.668997   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 03:25:02.674992   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 03:25:02.678991   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 03:25:02.682995   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 03:25:02.685983   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 03:25:02.696990   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 03:25:02.703996   12008 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 03:25:02.707139   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 03:25:02.707139   12008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 03:25:02.707139   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.796970   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.796970   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.799960   12008 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 03:25:02.796970   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.803094   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.805958   12008 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 03:25:02.805958   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 03:25:02.805958   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.810966   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.810966   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.818982   12008 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 03:25:02.816540   12008 addons.go:234] Setting addon default-storageclass=true in "addons-363100"
	I0520 03:25:02.826622   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:25:02.830563   12008 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 03:25:02.830563   12008 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 03:25:02.830563   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.830563   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:02.916546   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:02.916546   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:02.922539   12008 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 03:25:02.935465   12008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 03:25:02.935465   12008 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 03:25:02.935465   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:03.112255   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:03.112255   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:03.123062   12008 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 03:25:03.130064   12008 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 03:25:03.130064   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 03:25:03.130064   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:03.153061   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:03.153061   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:03.192062   12008 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 03:25:03.357393   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:03.357393   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:03.386392   12008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:25:03.413396   12008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:25:03.413396   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 03:25:03.413396   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:03.391395   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:03.416393   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:03.418395   12008 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 03:25:03.421394   12008 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 03:25:03.421394   12008 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 03:25:03.421394   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:04.628631   12008 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 03:25:04.628631   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 03:25:04.628631   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:04.641618   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:07.038626   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:08.580454   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.580454   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.582497   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:08.609701   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.609701   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.609701   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:08.655664   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.655664   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.655664   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:08.706558   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.706558   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.706558   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:08.860582   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.860582   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.895583   12008 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 03:25:08.915590   12008 out.go:177]   - Using image docker.io/busybox:stable
	I0520 03:25:08.920578   12008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 03:25:08.920578   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 03:25:08.920578   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:08.923581   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.923581   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.923581   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:08.931583   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:08.931583   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:08.931583   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:09.222199   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:09.277834   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:09.277834   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:09.277834   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:09.376935   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:09.376935   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:09.376935   12008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 03:25:09.376935   12008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 03:25:09.380568   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:09.685822   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:09.685822   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:09.685822   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:09.866158   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:09.866158   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:09.866158   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:09.891488   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:09.891572   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:09.891653   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:10.054450   12008 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 03:25:10.054450   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:10.430199   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:10.430199   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:10.430199   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:11.492357   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:12.070104   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:12.070104   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:12.070104   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:13.606947   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:15.579078   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:15.579078   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:15.579078   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:15.980081   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:16.054167   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.054274   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.056422   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
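Each `sshutil.go:53] new ssh client` struct above records the parameters minikube uses to reach the guest for the following `scp`/`Run` calls. For manual debugging, the same connection corresponds to an ordinary ssh invocation built from those fields (a sketch; the key path is the one printed in the log, and the command is only assembled here, not executed):

```shell
# Build the ssh command line implied by the logged sshutil struct.
# Fields copied from the log; nothing below contacts the VM.
ip=172.25.240.77
port=22
user=docker
key='C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa'
cmd="ssh -i $key -p $port $user@$ip"
echo "$cmd"
```

Running that command from the Jenkins host (with the path translated for the shell in use) drops you into the same guest the test is provisioning, which is often the fastest way to inspect `/etc/kubernetes/addons/` when an apply step fails.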
	I0520 03:25:16.137187   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.137187   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.137187   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.196578   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.196830   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.196978   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.277000   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.277000   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.277000   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.319860   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:16.319955   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.320006   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:16.441803   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.441803   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.441803   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.500243   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 03:25:16.522249   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.522249   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.522249   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.581246   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:16.581246   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.581246   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:16.603243   12008 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 03:25:16.603243   12008 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 03:25:16.705635   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.705797   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.705999   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.709368   12008 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 03:25:16.709368   12008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 03:25:16.766502   12008 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 03:25:16.766502   12008 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 03:25:16.807623   12008 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 03:25:16.807623   12008 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 03:25:16.809622   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:16.809622   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:16.809622   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:16.855793   12008 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 03:25:16.855892   12008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 03:25:17.058241   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:17.058316   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:17.058605   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:17.077509   12008 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 03:25:17.077583   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 03:25:17.106269   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 03:25:17.106269   12008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 03:25:17.118188   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 03:25:17.119183   12008 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 03:25:17.119183   12008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 03:25:17.151010   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:17.151149   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:17.151381   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:17.173794   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 03:25:17.231156   12008 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 03:25:17.231156   12008 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 03:25:17.252922   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:17.252922   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:17.252922   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:17.274347   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 03:25:17.274519   12008 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 03:25:17.363830   12008 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 03:25:17.363984   12008 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 03:25:17.409806   12008 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 03:25:17.409896   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 03:25:17.425714   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 03:25:17.450997   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 03:25:17.451114   12008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 03:25:17.499281   12008 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 03:25:17.499281   12008 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 03:25:17.600266   12008 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 03:25:17.600266   12008 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 03:25:17.634281   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 03:25:17.696103   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 03:25:17.696286   12008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 03:25:17.748296   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 03:25:17.756290   12008 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 03:25:17.756290   12008 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 03:25:17.886819   12008 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 03:25:17.886819   12008 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 03:25:17.896155   12008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 03:25:17.896155   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 03:25:17.935354   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:25:17.939121   12008 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 03:25:17.939182   12008 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 03:25:17.969576   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:17.969576   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:17.970584   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:18.055374   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 03:25:18.055482   12008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 03:25:18.141955   12008 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 03:25:18.142043   12008 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 03:25:18.180645   12008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 03:25:18.180645   12008 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 03:25:18.195362   12008 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 03:25:18.195362   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 03:25:18.321759   12008 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 03:25:18.321799   12008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 03:25:18.333659   12008 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 03:25:18.333659   12008 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 03:25:18.478984   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:18.554871   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 03:25:18.571289   12008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 03:25:18.571384   12008 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 03:25:18.687879   12008 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 03:25:18.687879   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 03:25:18.724087   12008 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 03:25:18.725101   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 03:25:18.814600   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 03:25:18.898304   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 03:25:19.054289   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:19.054793   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:19.055122   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:19.102166   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 03:25:19.343957   12008 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 03:25:19.344081   12008 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 03:25:19.514948   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:19.514948   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:19.515456   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:19.684615   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:19.684720   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:19.684797   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:19.847112   12008 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 03:25:19.847112   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 03:25:20.177884   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 03:25:20.264512   12008 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 03:25:20.264512   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 03:25:20.489003   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:20.680496   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 03:25:20.828768   12008 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 03:25:20.845912   12008 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 03:25:20.845912   12008 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 03:25:21.244260   12008 addons.go:234] Setting addon gcp-auth=true in "addons-363100"
	I0520 03:25:21.244405   12008 host.go:66] Checking if "addons-363100" exists ...
	I0520 03:25:21.246012   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:21.551146   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 03:25:22.988896   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:23.779356   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:23.779356   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:23.799859   12008 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 03:25:23.799859   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-363100 ).state
	I0520 03:25:25.415501   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:26.475583   12008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:25:26.476092   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:26.476696   12008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-363100 ).networkadapters[0]).ipaddresses[0]
	I0520 03:25:27.542582   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:29.444360   12008 main.go:141] libmachine: [stdout =====>] : 172.25.240.77
	
	I0520 03:25:29.444389   12008 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:25:29.444606   12008 sshutil.go:53] new ssh client: &{IP:172.25.240.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-363100\id_rsa Username:docker}
	I0520 03:25:29.756581   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.2550427s)
	I0520 03:25:29.756581   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (12.638386s)
	I0520 03:25:29.756581   12008 addons.go:470] Verifying addon ingress=true in "addons-363100"
	I0520 03:25:29.756740   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.582938s)
	I0520 03:25:29.760859   12008 out.go:177] * Verifying ingress addon...
	I0520 03:25:29.756826   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.3301035s)
	I0520 03:25:29.757048   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.1217513s)
	I0520 03:25:29.757164   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.0087998s)
	I0520 03:25:29.757251   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.8218902s)
	I0520 03:25:29.757310   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.2024324s)
	I0520 03:25:29.757397   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.9427897s)
	I0520 03:25:29.757639   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.8592884s)
	I0520 03:25:29.757750   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.6555774s)
	I0520 03:25:29.757879   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.5799243s)
	I0520 03:25:29.757879   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.0773775s)
	I0520 03:25:29.761021   12008 addons.go:470] Verifying addon registry=true in "addons-363100"
	W0520 03:25:29.761063   12008 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 03:25:29.764331   12008 out.go:177] * Verifying registry addon...
	I0520 03:25:29.761248   12008 addons.go:470] Verifying addon metrics-server=true in "addons-363100"
	I0520 03:25:29.761248   12008 retry.go:31] will retry after 182.444307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 03:25:29.766185   12008 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 03:25:29.766185   12008 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-363100 service yakd-dashboard -n yakd-dashboard
	
	I0520 03:25:29.772930   12008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 03:25:29.798241   12008 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 03:25:29.798241   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:29.798835   12008 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 03:25:29.798835   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0520 03:25:29.809559   12008 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0520 03:25:29.968146   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 03:25:29.975782   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:30.297633   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:30.298628   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:30.817623   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:30.817935   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:31.304831   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:31.316093   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:31.797995   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.2468429s)
	I0520 03:25:31.797995   12008 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.9981316s)
	I0520 03:25:31.797995   12008 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-363100"
	I0520 03:25:31.804517   12008 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 03:25:31.812527   12008 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 03:25:31.801501   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:31.813512   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:31.826510   12008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 03:25:31.840531   12008 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 03:25:31.844503   12008 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 03:25:31.844503   12008 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 03:25:31.873525   12008 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 03:25:31.873525   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:31.979235   12008 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 03:25:31.979235   12008 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 03:25:31.981241   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:32.130642   12008 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 03:25:32.130642   12008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 03:25:32.268482   12008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 03:25:32.286003   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:32.286462   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:32.349541   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:32.788317   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:32.793410   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:32.854161   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:33.181532   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.2133192s)
	I0520 03:25:33.286805   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:33.291856   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:33.341111   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:33.803275   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:33.804554   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:33.861250   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:34.052873   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:34.124122   12008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.8554053s)
	I0520 03:25:34.137769   12008 addons.go:470] Verifying addon gcp-auth=true in "addons-363100"
	I0520 03:25:34.141005   12008 out.go:177] * Verifying gcp-auth addon...
	I0520 03:25:34.149069   12008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 03:25:34.190421   12008 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 03:25:34.190421   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:34.289447   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:34.296995   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:34.351555   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:34.811326   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:34.811740   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:34.816652   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:34.845294   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:35.163468   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:35.290053   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:35.294156   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:35.353229   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:35.667562   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:35.784088   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:35.784088   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:35.846453   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:36.158720   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:36.289654   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:36.289654   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:36.355644   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:36.467623   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:36.670328   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:36.782935   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:36.785785   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:36.851282   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:37.160485   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:37.291085   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:37.291748   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:37.352296   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:37.669525   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:37.789124   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:37.790124   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:37.853151   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:38.163113   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:38.295956   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:38.297697   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:38.354907   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:38.668743   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:38.784849   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:38.785854   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:38.856294   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:38.964460   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:39.166692   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:39.281468   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:39.282445   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:39.350472   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:39.657158   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:39.784665   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:39.786682   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:39.850676   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:40.165291   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:40.278629   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:40.282668   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:40.341667   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:40.660166   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:40.787907   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:40.790609   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:40.850664   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:40.964784   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:41.164899   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:41.278548   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:41.284098   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:41.346086   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:41.657523   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:41.786438   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:41.786438   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:41.852417   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:42.162711   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:42.292744   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:42.292861   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:42.357014   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:42.667362   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:42.779375   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:42.783330   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:42.849165   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:42.972563   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:43.159778   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:43.345902   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:43.348916   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:43.353888   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:43.667208   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:43.784284   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:43.788876   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:43.845737   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:44.157030   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:44.285422   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:44.286288   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:44.353427   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:44.664528   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:44.795630   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:44.795981   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:44.856333   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:45.239895   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:45.293018   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:45.294922   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:45.355963   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:45.468200   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:45.655632   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:45.784388   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:45.786806   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:45.847879   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:46.161929   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:46.293888   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:46.294218   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:46.357923   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:46.659007   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:46.783273   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:46.783789   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:46.846021   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:47.859613   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:47.860840   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:47.862629   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:47.862841   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:47.863623   12008 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"False"
	I0520 03:25:47.866563   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:47.873053   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:47.873399   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:47.874718   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:48.164050   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:48.293610   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:48.293772   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:48.355574   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:48.670661   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:48.784589   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:48.784589   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:48.854793   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:49.160234   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:49.289765   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:49.291317   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:49.352465   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:49.669232   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:49.790573   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:49.791095   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:49.844965   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:49.975021   12008 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:49.975021   12008 pod_ready.go:81] duration metric: took 50.020305s for pod "coredns-7db6d8ff4d-g9sxx" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:49.975021   12008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vwvqd" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:50.160052   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:50.286849   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:50.286849   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:50.353078   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:50.666877   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:50.780527   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:50.786854   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:50.842725   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:50.987333   12008 pod_ready.go:92] pod "coredns-7db6d8ff4d-vwvqd" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:50.987333   12008 pod_ready.go:81] duration metric: took 1.0123113s for pod "coredns-7db6d8ff4d-vwvqd" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:50.987333   12008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:50.994180   12008 pod_ready.go:92] pod "etcd-addons-363100" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:50.994180   12008 pod_ready.go:81] duration metric: took 6.8468ms for pod "etcd-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:50.994180   12008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.001988   12008 pod_ready.go:92] pod "kube-apiserver-addons-363100" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:51.001988   12008 pod_ready.go:81] duration metric: took 7.808ms for pod "kube-apiserver-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.001988   12008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.009880   12008 pod_ready.go:92] pod "kube-controller-manager-addons-363100" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:51.009880   12008 pod_ready.go:81] duration metric: took 7.8337ms for pod "kube-controller-manager-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.009880   12008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-czj7g" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.161354   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:51.267941   12008 pod_ready.go:92] pod "kube-proxy-czj7g" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:51.267941   12008 pod_ready.go:81] duration metric: took 258.0606ms for pod "kube-proxy-czj7g" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.267941   12008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.284171   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:51.284784   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:51.350546   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:51.668156   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:51.671435   12008 pod_ready.go:92] pod "kube-scheduler-addons-363100" in "kube-system" namespace has status "Ready":"True"
	I0520 03:25:51.671491   12008 pod_ready.go:81] duration metric: took 403.5493ms for pod "kube-scheduler-addons-363100" in "kube-system" namespace to be "Ready" ...
	I0520 03:25:51.671548   12008 pod_ready.go:38] duration metric: took 51.9118381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 03:25:51.671658   12008 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:25:51.685306   12008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:25:51.753193   12008 api_server.go:72] duration metric: took 56.0923407s to wait for apiserver process to appear ...
	I0520 03:25:51.753267   12008 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:25:51.753371   12008 api_server.go:253] Checking apiserver healthz at https://172.25.240.77:8443/healthz ...
	I0520 03:25:51.762668   12008 api_server.go:279] https://172.25.240.77:8443/healthz returned 200:
	ok
	I0520 03:25:51.765777   12008 api_server.go:141] control plane version: v1.30.1
	I0520 03:25:51.765860   12008 api_server.go:131] duration metric: took 12.4531ms to wait for apiserver health ...
	I0520 03:25:51.765860   12008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 03:25:52.259562   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:52.259562   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:52.259562   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:52.265959   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:52.278211   12008 system_pods.go:59] 19 kube-system pods found
	I0520 03:25:52.278211   12008 system_pods.go:61] "coredns-7db6d8ff4d-g9sxx" [66da782a-b14b-478d-a203-e45218cbe2a3] Running
	I0520 03:25:52.278211   12008 system_pods.go:61] "coredns-7db6d8ff4d-vwvqd" [a3444048-2b12-498e-a075-e907c6851721] Running
	I0520 03:25:52.278211   12008 system_pods.go:61] "csi-hostpath-attacher-0" [269b44a1-b7eb-4bd0-a0d9-a294474e6aad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0520 03:25:52.278211   12008 system_pods.go:61] "csi-hostpath-resizer-0" [4abc279c-e959-4121-aa54-a3f8c907ea0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0520 03:25:52.278211   12008 system_pods.go:61] "csi-hostpathplugin-xqndx" [f97ca9e6-ada5-48f9-95d7-8889cf8a5bd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0520 03:25:52.278211   12008 system_pods.go:61] "etcd-addons-363100" [2f02fdd7-6be8-47c6-b22d-e8399c5a7cea] Running
	I0520 03:25:52.278211   12008 system_pods.go:61] "kube-apiserver-addons-363100" [7441b578-3959-4444-a630-a46f804edb6a] Running
	I0520 03:25:52.278211   12008 system_pods.go:61] "kube-controller-manager-addons-363100" [62229d51-2dfd-4203-9250-5e1551ded585] Running
	I0520 03:25:52.278211   12008 system_pods.go:61] "kube-ingress-dns-minikube" [277e63b1-4132-4823-97ae-2e4016301755] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 03:25:52.278211   12008 system_pods.go:61] "kube-proxy-czj7g" [3863df38-8e13-4a13-8706-f1a7cd200331] Running
	I0520 03:25:52.278211   12008 system_pods.go:61] "kube-scheduler-addons-363100" [905d8e6b-f677-43a6-a92a-dad791fabe2c] Running
	I0520 03:25:52.278760   12008 system_pods.go:61] "metrics-server-c59844bb4-g6npj" [3ec439f0-a4e2-4503-97f6-11c20480f520] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 03:25:52.278760   12008 system_pods.go:61] "nvidia-device-plugin-daemonset-bvjl2" [bec45794-951c-4586-b0d5-933b3290df13] Running
	I0520 03:25:52.278760   12008 system_pods.go:61] "registry-proxy-c6b67" [f7feab3d-2695-4f4a-92be-d20eb369c7fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0520 03:25:52.278812   12008 system_pods.go:61] "registry-zwwpn" [3869227a-4098-41f0-bdca-bbf64dc302a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0520 03:25:52.278812   12008 system_pods.go:61] "snapshot-controller-745499f584-l2ttj" [1e89a99a-bc5a-4f28-a4c3-65b4a6ff8aa5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0520 03:25:52.278812   12008 system_pods.go:61] "snapshot-controller-745499f584-q74px" [5a54eaa0-2555-4917-b5b2-dc4139ec4b8b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0520 03:25:52.278812   12008 system_pods.go:61] "storage-provisioner" [0835bc89-5784-4a26-8347-9f77673996cf] Running
	I0520 03:25:52.278812   12008 system_pods.go:61] "tiller-deploy-6677d64bcd-f4zcs" [8f839153-72eb-4331-b147-6db46c4d13ee] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0520 03:25:52.278812   12008 system_pods.go:74] duration metric: took 512.9517ms to wait for pod list to return data ...
	I0520 03:25:52.278812   12008 default_sa.go:34] waiting for default service account to be created ...
	I0520 03:25:52.757123   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:52.757123   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:52.757813   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:52.757813   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:52.765721   12008 default_sa.go:45] found service account: "default"
	I0520 03:25:52.765721   12008 default_sa.go:55] duration metric: took 486.9093ms for default service account to be created ...
	I0520 03:25:52.765802   12008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 03:25:52.779476   12008 system_pods.go:86] 19 kube-system pods found
	I0520 03:25:52.779476   12008 system_pods.go:89] "coredns-7db6d8ff4d-g9sxx" [66da782a-b14b-478d-a203-e45218cbe2a3] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "coredns-7db6d8ff4d-vwvqd" [a3444048-2b12-498e-a075-e907c6851721] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "csi-hostpath-attacher-0" [269b44a1-b7eb-4bd0-a0d9-a294474e6aad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0520 03:25:52.779476   12008 system_pods.go:89] "csi-hostpath-resizer-0" [4abc279c-e959-4121-aa54-a3f8c907ea0a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0520 03:25:52.779476   12008 system_pods.go:89] "csi-hostpathplugin-xqndx" [f97ca9e6-ada5-48f9-95d7-8889cf8a5bd9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0520 03:25:52.779476   12008 system_pods.go:89] "etcd-addons-363100" [2f02fdd7-6be8-47c6-b22d-e8399c5a7cea] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "kube-apiserver-addons-363100" [7441b578-3959-4444-a630-a46f804edb6a] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "kube-controller-manager-addons-363100" [62229d51-2dfd-4203-9250-5e1551ded585] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "kube-ingress-dns-minikube" [277e63b1-4132-4823-97ae-2e4016301755] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 03:25:52.779476   12008 system_pods.go:89] "kube-proxy-czj7g" [3863df38-8e13-4a13-8706-f1a7cd200331] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "kube-scheduler-addons-363100" [905d8e6b-f677-43a6-a92a-dad791fabe2c] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "metrics-server-c59844bb4-g6npj" [3ec439f0-a4e2-4503-97f6-11c20480f520] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 03:25:52.779476   12008 system_pods.go:89] "nvidia-device-plugin-daemonset-bvjl2" [bec45794-951c-4586-b0d5-933b3290df13] Running
	I0520 03:25:52.779476   12008 system_pods.go:89] "registry-proxy-c6b67" [f7feab3d-2695-4f4a-92be-d20eb369c7fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0520 03:25:52.779476   12008 system_pods.go:89] "registry-zwwpn" [3869227a-4098-41f0-bdca-bbf64dc302a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0520 03:25:52.779476   12008 system_pods.go:89] "snapshot-controller-745499f584-l2ttj" [1e89a99a-bc5a-4f28-a4c3-65b4a6ff8aa5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0520 03:25:52.779476   12008 system_pods.go:89] "snapshot-controller-745499f584-q74px" [5a54eaa0-2555-4917-b5b2-dc4139ec4b8b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0520 03:25:52.779476   12008 system_pods.go:89] "storage-provisioner" [0835bc89-5784-4a26-8347-9f77673996cf] Running
	I0520 03:25:52.780022   12008 system_pods.go:89] "tiller-deploy-6677d64bcd-f4zcs" [8f839153-72eb-4331-b147-6db46c4d13ee] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0520 03:25:52.780022   12008 system_pods.go:126] duration metric: took 14.2195ms to wait for k8s-apps to be running ...
	I0520 03:25:52.780067   12008 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 03:25:52.787747   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:52.788343   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:52.792881   12008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 03:25:52.838650   12008 system_svc.go:56] duration metric: took 58.5823ms WaitForService to wait for kubelet
	I0520 03:25:52.838650   12008 kubeadm.go:576] duration metric: took 57.1777972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:25:52.838650   12008 node_conditions.go:102] verifying NodePressure condition ...
	I0520 03:25:52.843944   12008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 03:25:52.843944   12008 node_conditions.go:123] node cpu capacity is 2
	I0520 03:25:52.843944   12008 node_conditions.go:105] duration metric: took 5.2938ms to run NodePressure ...
	I0520 03:25:52.843944   12008 start.go:240] waiting for startup goroutines ...
	I0520 03:25:52.845206   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:53.160774   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:53.304203   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:53.304830   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:53.369829   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:53.669492   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:53.826692   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:53.827276   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:53.855998   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:54.159143   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:54.294151   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:54.297737   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:54.351752   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:54.673397   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:54.782036   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:54.794233   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:54.844173   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:55.159275   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:55.289905   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:55.289905   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:55.352889   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:55.670159   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:55.780029   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:55.783123   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:55.847693   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:56.159588   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:56.293518   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:56.293649   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:56.353945   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:56.669244   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:56.780368   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:56.782961   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:56.844440   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:57.159227   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:57.287667   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:57.289328   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:57.351962   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:57.666485   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:57.783003   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:57.783569   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:57.844818   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:58.159319   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:58.287359   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:58.292088   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:58.352387   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:58.667365   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:58.779587   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:58.781547   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:58.848491   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:59.164494   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:25:59.292166   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:25:59.292366   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:25:59.359565   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:25:59.665688   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:00.143757   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:00.148183   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:00.152516   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:00.159037   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:00.284720   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:00.284794   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:00.353056   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:00.655375   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:00.781947   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:00.784718   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:00.862415   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:01.159141   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:01.284251   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:01.284251   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:01.352275   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:01.659884   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:01.786707   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:01.788074   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:01.849404   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:02.165834   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:02.295291   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:02.297438   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:02.355330   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:02.666049   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:02.794310   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:02.799158   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:02.856423   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:03.169176   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:03.281480   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:03.281480   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:03.344702   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:03.666656   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:03.790357   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:03.794893   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:03.854553   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:04.169162   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:04.284418   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:04.286550   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:04.347389   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:04.704001   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:04.792096   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:04.793983   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:04.857845   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:05.168510   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:05.280730   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:05.283750   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:05.343106   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:05.659200   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:05.797814   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:05.798437   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:05.847781   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:06.163489   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:06.291149   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:06.294276   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:06.355550   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:06.672536   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:06.781929   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:06.782490   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:06.845089   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:07.161270   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:07.290709   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:07.290709   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:07.353724   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:07.668984   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:07.781965   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:07.782965   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:07.849948   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:08.167783   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:08.280876   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:08.280876   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:08.342479   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:08.658838   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:08.788276   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:08.789303   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:08.851022   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:09.156760   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:09.287443   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:09.289847   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:09.354672   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:09.667068   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:09.795996   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:09.796325   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:09.855063   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:10.169717   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:10.287284   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:10.291520   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:10.345485   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:10.658638   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:10.788419   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:10.790242   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:10.851404   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:11.167915   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:11.282697   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:11.283354   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:11.346834   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:11.660251   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:11.787376   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:11.790337   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:11.852317   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:12.167953   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:12.276997   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:12.282049   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:12.342610   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:12.658943   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:12.790959   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:12.791959   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:12.850112   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:13.168559   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:13.282633   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:13.283822   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:13.348305   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:13.660187   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:13.790419   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:13.792811   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:13.854938   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:14.168799   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:14.284731   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:14.289516   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:14.347206   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:14.662834   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:14.793435   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:14.793917   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:14.856892   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:15.160323   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:15.288983   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:15.288983   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:15.372795   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:15.668548   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:15.783137   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:15.786716   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:15.843889   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:16.155072   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:16.812611   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:16.819954   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:16.821134   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:16.826724   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:16.828712   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:16.834715   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:16.842959   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:18.250490   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:18.257640   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:18.265810   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:18.265810   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:18.266072   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:18.272809   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:18.275213   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:18.277549   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:18.281976   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:18.286497   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:18.355219   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:18.670239   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:18.778673   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:18.784945   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 03:26:18.844530   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:19.158527   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:19.292656   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:19.293446   12008 kapi.go:107] duration metric: took 49.5204839s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 03:26:19.345746   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:19.660782   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:19.786781   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:19.852973   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:20.168312   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:20.280706   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:20.347001   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:20.662349   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:20.791325   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:20.857099   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:21.168744   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:21.279979   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:21.344160   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:21.657447   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:21.785240   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:21.850215   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:22.167326   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:22.293525   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:22.344086   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:22.655876   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:22.783014   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:22.847382   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:23.164615   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:23.289030   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:23.356224   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:23.658887   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:23.785352   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:23.849652   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:24.164067   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:24.293071   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:24.355053   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:24.654198   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:24.781072   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:24.847353   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:25.160618   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:25.289985   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:25.362989   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:25.667651   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:25.777746   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:25.843831   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:26.158534   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:26.284510   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:26.354845   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:26.663383   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:26.793451   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:26.845219   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:27.157884   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:27.286035   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:27.345270   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:27.662039   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:27.789529   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:27.855067   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:28.162800   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:28.286125   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:28.353470   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:28.665510   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:28.778434   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:28.845238   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:29.164297   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:29.288620   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:29.352554   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:29.665005   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:29.793990   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:29.843368   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:30.153878   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:30.284022   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:30.356616   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:30.664891   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:30.791710   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:30.856957   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:31.160205   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:31.290492   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:31.353035   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:31.668985   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:31.785419   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:31.850750   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:32.302303   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:32.303083   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:32.358398   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:32.667432   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:32.780146   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:32.847745   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:33.161160   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:33.288459   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:33.353637   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:33.666267   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:33.777818   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:33.843568   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:34.159392   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:34.283580   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:34.349865   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:34.669992   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:34.789804   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:34.873020   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:35.169639   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:35.280290   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:35.344561   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:35.659802   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:35.790519   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:35.855140   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:36.156616   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:36.288723   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:36.351702   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:36.664909   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:37.356272   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:37.360308   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:37.361093   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:37.368974   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:37.369032   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:37.657343   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:37.783201   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:37.847817   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:38.157134   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:38.297456   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:38.349480   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:38.659934   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:38.789668   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:38.852889   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:39.169629   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:39.277706   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:39.343019   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:39.658805   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:39.797484   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:39.851284   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:40.164551   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:40.291224   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:40.356510   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:40.656140   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:40.781694   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:40.849279   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:41.161403   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:41.289286   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:41.352959   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:41.669478   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:41.782542   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:41.850013   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:42.168527   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:42.278847   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:42.346184   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:43.094387   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:43.103564   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:43.106955   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:43.516120   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:43.516120   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:43.519793   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:43.658024   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:43.786992   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:43.854891   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:44.168139   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:44.278829   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:44.345298   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:44.680933   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:44.784449   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:44.843803   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:45.177162   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:45.290764   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:45.356560   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:45.666813   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:45.795442   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:45.842731   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:46.160825   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:46.286141   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:46.364816   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:46.664843   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:46.790477   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:46.856520   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:47.170526   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:47.281662   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:47.347974   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:47.661084   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:47.790091   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:47.851941   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:48.172671   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:48.282966   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:48.343536   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:48.656053   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:48.790340   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:48.852275   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:49.166365   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:49.279703   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:49.364105   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:49.702126   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:50.059915   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:50.065809   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:50.156629   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:50.283268   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:50.348206   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:50.661061   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:50.785400   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:50.854514   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:51.187617   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:51.292338   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:51.352301   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:51.669072   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:51.790322   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:51.854947   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:52.164369   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:52.278330   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:52.344352   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:52.700795   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:52.786868   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:52.850347   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:53.167032   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:53.368366   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:53.368366   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:53.657111   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:53.784536   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:53.848696   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:54.163851   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:54.290072   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:54.344145   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:54.659015   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:54.792937   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:54.852286   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:55.171498   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:55.282508   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:55.349007   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:55.659562   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:55.787445   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:55.853738   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:56.167450   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:56.279281   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:56.342885   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:56.667917   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:56.789666   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:56.855672   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:57.154726   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:57.285025   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:57.351723   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:57.665784   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:57.912120   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:57.912120   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:58.158198   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:58.285426   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:58.352332   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:58.677340   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:58.785761   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:58.848943   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:59.769041   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:26:59.770273   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:59.771542   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:59.971240   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:26:59.972249   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:26:59.973394   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:00.525608   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:00.525853   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:00.525853   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:00.659750   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:00.785137   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:00.851887   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:01.160651   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:01.286978   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:01.352738   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:01.663708   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:01.792917   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:01.854446   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:02.154924   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:02.285634   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:02.351943   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:02.664591   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:02.793093   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:02.855093   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:03.155229   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:03.283852   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:03.353495   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:03.664927   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:04.394555   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:04.396643   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:04.404180   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:04.404180   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:04.411251   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:04.656597   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:04.797852   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:04.852569   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:05.162809   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:05.286597   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:05.353185   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:05.662356   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:05.789124   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:05.855236   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:06.167015   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:06.280149   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:06.346003   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:06.662116   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:06.789250   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:06.865820   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:07.165551   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:07.278496   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:07.346566   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:07.660139   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:07.785049   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:07.848126   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:08.165715   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:08.293873   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:08.343199   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:08.668737   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:08.781016   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:08.855093   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:09.157675   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:09.287246   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:09.358916   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:09.668754   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:09.780868   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:09.858398   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:10.173384   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:10.284478   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:10.349357   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:10.661299   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:10.786120   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:10.852516   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:11.163446   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:11.295326   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:11.363665   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:11.668644   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:11.782652   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:11.852292   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:12.599438   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:12.604195   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:12.604631   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:12.699891   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:12.790157   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:12.856875   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:13.154395   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:13.282973   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:13.351631   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:13.662928   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:13.789617   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:13.854140   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:14.166840   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:14.286392   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:14.350961   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:14.707199   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:14.791659   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:14.968486   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:15.168238   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:15.280980   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:15.346905   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:15.660574   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:15.793961   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:15.857932   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:16.169448   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:16.283852   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:16.348510   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:16.671894   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:16.789484   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:16.859282   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:17.165701   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:17.277998   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:17.344540   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:17.654831   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:17.784898   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:17.848269   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:18.164423   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:18.292913   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:18.355641   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:18.670361   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:19.163035   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:19.163035   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:19.183770   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:19.285310   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:19.353934   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:19.668752   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:19.781138   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:19.842955   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:20.154985   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:20.292235   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:20.346626   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:20.658201   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:20.787260   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:20.854007   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:21.169782   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:21.281594   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:21.343457   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:21.657117   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:22.403378   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:22.403838   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:22.406358   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:22.412113   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:22.416813   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:22.660205   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:22.793408   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:22.864337   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:23.176591   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:23.281428   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:23.350049   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:23.661382   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:23.792112   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:23.858022   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:24.170458   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:24.287924   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:24.353989   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:24.673254   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:24.793338   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:24.847331   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:25.157405   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:25.281923   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:25.349328   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:25.660949   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:25.789829   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:25.854070   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:26.166456   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:26.291067   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:26.349063   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:26.661774   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:26.785970   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:26.853991   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:27.165523   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:27.295920   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:27.354496   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:27.667137   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:27.799893   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:27.857616   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:28.164452   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:28.292805   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:28.357550   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:28.655166   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:28.782961   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:28.847569   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:29.163389   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:29.290157   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:29.354401   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:29.669259   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:29.780915   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:29.842809   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:30.170213   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:30.278877   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:30.362291   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:30.658482   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:30.782869   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:30.846929   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:31.173267   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:31.292515   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:31.358561   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:31.657901   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:31.782861   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:31.848842   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:32.157515   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:32.286715   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:32.353703   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:32.664807   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:32.790752   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:32.855929   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:33.164430   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:33.295809   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:33.355807   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:33.670014   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:33.780341   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:33.866173   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:34.170793   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:34.280384   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:34.346273   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:34.667017   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:34.783048   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:34.846040   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:35.168892   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:35.291769   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:35.369771   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:35.655963   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:35.785671   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:35.851152   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:36.165921   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:36.279742   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:36.343882   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:36.660907   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:37.300583   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:37.300928   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:37.301098   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:37.307242   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:37.359483   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:37.666963   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:37.779991   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:37.842955   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:38.168933   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:38.297176   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:38.360673   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:38.654122   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:38.783144   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:38.847012   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:39.159952   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:39.290970   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:39.352656   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:39.664379   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:39.792046   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:39.855598   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:40.169406   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:40.660940   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:40.664758   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:40.667576   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:40.798003   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:40.845745   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:41.161759   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:41.291925   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:41.354221   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:41.668154   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:41.783298   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:41.842052   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 03:27:42.164496   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:42.289087   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:42.358801   12008 kapi.go:107] duration metric: took 2m10.5322048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 03:27:42.663599   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:42.792395   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:43.159116   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:43.283956   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:43.657918   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:43.782320   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:44.159405   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:44.287147   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:44.659503   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:44.784487   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:45.160699   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:45.287648   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:45.664365   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:45.790763   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:46.168200   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:46.277844   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:46.669160   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:46.784363   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:47.165190   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:47.287063   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:47.668137   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:47.777722   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:48.161736   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:48.289175   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:48.669683   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:48.780736   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:49.160836   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:49.285497   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:49.804133   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:49.805889   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:50.359259   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:50.362329   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:50.667332   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:50.783566   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:51.170892   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:51.290611   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:51.654882   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:51.783492   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:52.165908   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:52.279725   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:52.658049   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:52.783960   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:53.163604   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:53.291411   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:53.655886   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:53.784791   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:54.169577   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:54.281870   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:54.666318   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:54.791582   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:55.171501   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:55.282030   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:55.664507   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:55.789590   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:56.169349   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:56.280094   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:56.660033   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:56.789950   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:57.168701   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:57.282392   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:57.660931   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:57.788153   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:58.155915   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:58.284757   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:58.664267   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:58.794215   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:59.164507   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:59.290927   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:27:59.915258   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:27:59.925869   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:28:00.170310   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:00.280003   12008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 03:28:00.656023   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:00.793459   12008 kapi.go:107] duration metric: took 2m31.0271731s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 03:28:01.236438   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:01.655924   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:02.164714   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:03.195724   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:03.203384   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:03.655285   12008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 03:28:04.160705   12008 kapi.go:107] duration metric: took 2m30.0115948s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 03:28:04.164021   12008 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-363100 cluster.
	I0520 03:28:04.167479   12008 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 03:28:04.169923   12008 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0520 03:28:04.172651   12008 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0520 03:28:04.177184   12008 addons.go:505] duration metric: took 3m8.5162418s for enable addons: enabled=[helm-tiller nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0520 03:28:04.177184   12008 start.go:245] waiting for cluster config update ...
	I0520 03:28:04.177184   12008 start.go:254] writing updated cluster config ...
	I0520 03:28:04.193012   12008 ssh_runner.go:195] Run: rm -f paused
	I0520 03:28:04.478032   12008 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 03:28:04.485660   12008 out.go:177] * Done! kubectl is now configured to use "addons-363100" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.654885808Z" level=info msg="shim disconnected" id=1521373aaeefc2c6e62b296eb18270e469807c7e6dd0e758e0c4ac4f914e9aef namespace=moby
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.655277608Z" level=warning msg="cleaning up after shim disconnected" id=1521373aaeefc2c6e62b296eb18270e469807c7e6dd0e758e0c4ac4f914e9aef namespace=moby
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.655290408Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 10:28:56 addons-363100 dockerd[1321]: time="2024-05-20T10:28:56.923754074Z" level=info msg="ignoring event" container=b56d8a47b75198ce3e0613dbb5bfc29194a6030598665b36ac759ddb073a17aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.925113574Z" level=info msg="shim disconnected" id=b56d8a47b75198ce3e0613dbb5bfc29194a6030598665b36ac759ddb073a17aa namespace=moby
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.925361974Z" level=warning msg="cleaning up after shim disconnected" id=b56d8a47b75198ce3e0613dbb5bfc29194a6030598665b36ac759ddb073a17aa namespace=moby
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.925477474Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 10:28:56 addons-363100 dockerd[1328]: time="2024-05-20T10:28:56.965430284Z" level=warning msg="cleanup warnings time=\"2024-05-20T10:28:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1321]: time="2024-05-20T10:28:59.723764656Z" level=info msg="ignoring event" container=4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:28:59 addons-363100 dockerd[1328]: time="2024-05-20T10:28:59.725309358Z" level=info msg="shim disconnected" id=4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7 namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1328]: time="2024-05-20T10:28:59.725425859Z" level=warning msg="cleaning up after shim disconnected" id=4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7 namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1328]: time="2024-05-20T10:28:59.725437759Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1328]: time="2024-05-20T10:28:59.935707711Z" level=info msg="shim disconnected" id=14b6b7ed3d3800122bd7e9c1d581eca90d3a3c3766e5cd93c45194be98375efe namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1328]: time="2024-05-20T10:28:59.935800711Z" level=warning msg="cleaning up after shim disconnected" id=14b6b7ed3d3800122bd7e9c1d581eca90d3a3c3766e5cd93c45194be98375efe namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1328]: time="2024-05-20T10:28:59.935814011Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 10:28:59 addons-363100 dockerd[1321]: time="2024-05-20T10:28:59.936362512Z" level=info msg="ignoring event" container=14b6b7ed3d3800122bd7e9c1d581eca90d3a3c3766e5cd93c45194be98375efe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:29:05 addons-363100 cri-dockerd[1227]: time="2024-05-20T10:29:05Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a"
	May 20 10:29:05 addons-363100 dockerd[1328]: time="2024-05-20T10:29:05.848245138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:29:05 addons-363100 dockerd[1328]: time="2024-05-20T10:29:05.848516838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:29:05 addons-363100 dockerd[1328]: time="2024-05-20T10:29:05.848541738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:29:05 addons-363100 dockerd[1328]: time="2024-05-20T10:29:05.849061438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:29:06 addons-363100 dockerd[1321]: time="2024-05-20T10:29:06.964082425Z" level=info msg="ignoring event" container=84e231a7dda24e219b4d00abd5a8b07c1e60d08fd7a210636d11d80b17c92033 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:29:06 addons-363100 dockerd[1328]: time="2024-05-20T10:29:06.966666126Z" level=info msg="shim disconnected" id=84e231a7dda24e219b4d00abd5a8b07c1e60d08fd7a210636d11d80b17c92033 namespace=moby
	May 20 10:29:06 addons-363100 dockerd[1328]: time="2024-05-20T10:29:06.966838326Z" level=warning msg="cleaning up after shim disconnected" id=84e231a7dda24e219b4d00abd5a8b07c1e60d08fd7a210636d11d80b17c92033 namespace=moby
	May 20 10:29:06 addons-363100 dockerd[1328]: time="2024-05-20T10:29:06.966857626Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	84e231a7dda24       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a                            6 seconds ago        Exited              gadget                                   4                   f88d4aebef610       gadget-cvvhl
	2434af7de67ab       a416a98b71e22                                                                                                                                31 seconds ago       Exited              helper-pod                               0                   2c8c6355eb5cb       helper-pod-delete-pvc-94e3d7fc-7011-49fb-8aa0-2b4343d236b6
	bf223ff8ebf87       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   ca8876f5f4d1e       gcp-auth-5db96cd9b4-v5wrs
	a9deff69272e4       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   aab58ea06972f       ingress-nginx-controller-768f948f8f-zbcwl
	09bdc60f2feb6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   1d1f68dc3207d       csi-hostpathplugin-xqndx
	a74394a7cbf34       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   1d1f68dc3207d       csi-hostpathplugin-xqndx
	4f5729cf09546       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   1d1f68dc3207d       csi-hostpathplugin-xqndx
	869e4633cfe89       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   1d1f68dc3207d       csi-hostpathplugin-xqndx
	22ad1289e3944       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   1d1f68dc3207d       csi-hostpathplugin-xqndx
	fb7c2d411ab9a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   046afb75fcf17       csi-hostpath-resizer-0
	b16b8d60e8792       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   1d1f68dc3207d       csi-hostpathplugin-xqndx
	74c43c00710f9       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   67d543abcb695       csi-hostpath-attacher-0
	ce7fad4ad640a       684c5ea3b61b2                                                                                                                                About a minute ago   Exited              patch                                    1                   b0cd15d50d426       ingress-nginx-admission-patch-xgccn
	9fb2ac5b6098c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              create                                   0                   2ad8161718891       ingress-nginx-admission-create-2jfb5
	9b9ba7e54accd       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   f9443fa8aa155       local-path-provisioner-8d985888d-778sw
	d07229292c747       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   703080ab8c4fb       snapshot-controller-745499f584-q74px
	0ee72e2ff97a8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   85bcb59654015       snapshot-controller-745499f584-l2ttj
	ed6b548814897       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   b0de03722f43f       yakd-dashboard-5ddbf7d777-t959q
	0e57ae8c4bc0a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   207bf16ab37c0       kube-ingress-dns-minikube
	fe3bbd1bb708b       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   037a22dffbef5       cloud-spanner-emulator-6fcd4f6f98-62drz
	a0557c8b270cc       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   722d55fc82d38       nvidia-device-plugin-daemonset-bvjl2
	2a33a55117c4a       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   ad0fc1e37d659       storage-provisioner
	d03ba69cffb95       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   c7f91e6399b8b       coredns-7db6d8ff4d-g9sxx
	f352c62c5f914       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   ada798282f18b       coredns-7db6d8ff4d-vwvqd
	c0142927d7f66       747097150317f                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   256da6e63491b       kube-proxy-czj7g
	367dcacfa4d16       25a1387cdab82                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   2c712345143f5       kube-controller-manager-addons-363100
	59dd09b9f5f08       91be940803172                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   c40f34bc6677a       kube-apiserver-addons-363100
	24f0d56641e26       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   8e189b28220b4       etcd-addons-363100
	126f2be4e9c10       a52dc94f0a912                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   6a8a4daccca03       kube-scheduler-addons-363100
	
	
	==> controller_ingress [a9deff69272e] <==
	W0520 10:28:00.447740       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0520 10:28:00.448109       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0520 10:28:00.454944       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.1" state="clean" commit="6911225c3f747e1cd9d109c305436d08b668f086" platform="linux/amd64"
	I0520 10:28:01.371660       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0520 10:28:01.414731       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0520 10:28:01.434251       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0520 10:28:01.461103       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"88403f15-62d9-4261-ad13-4e21213ac906", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0520 10:28:01.472573       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"81bc3d20-2885-4c7b-8c2d-132d04b74c31", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0520 10:28:01.472774       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"372429d8-e164-4c0d-a84a-6836b696454f", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0520 10:28:02.637162       7 nginx.go:307] "Starting NGINX process"
	I0520 10:28:02.637334       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0520 10:28:02.640108       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0520 10:28:02.640576       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0520 10:28:02.742357       7 controller.go:210] "Backend successfully reloaded"
	I0520 10:28:02.742463       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0520 10:28:02.742668       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-zbcwl", UID:"2b0fdc07-e4e6-474c-8c8f-ddcfe6cf92fb", APIVersion:"v1", ResourceVersion:"1240", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0520 10:28:03.204024       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0520 10:28:03.204451       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-zbcwl"
	I0520 10:28:03.211835       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-zbcwl" node="addons-363100"
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [d03ba69cffb9] <==
	Trace[1530082983]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30013ms (10:25:40.194)
	Trace[1530082983]: [30.013830461s] [30.013830461s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1230459089]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:25:10.193) (total time: 30001ms):
	Trace[1230459089]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:25:40.194)
	Trace[1230459089]: [30.001366364s] [30.001366364s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42028 - 30236 "HINFO IN 5946131719620010520.8273304716792700404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.19156261s
	[INFO] 10.244.0.7:52415 - 38888 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0003435s
	[INFO] 10.244.0.7:52415 - 56021 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0002395s
	[INFO] 10.244.0.7:51948 - 40662 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002346s
	[INFO] 10.244.0.7:51948 - 55765 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001201s
	[INFO] 10.244.0.7:47236 - 57195 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0003152s
	[INFO] 10.244.0.7:47236 - 61540 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000360899s
	[INFO] 10.244.0.7:46490 - 47069 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000977s
	[INFO] 10.244.0.7:46490 - 65240 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000994s
	[INFO] 10.244.0.22:34142 - 39566 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000523s
	[INFO] 10.244.0.22:41517 - 27079 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001839s
	[INFO] 10.244.0.22:47032 - 403 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001021s
	[INFO] 10.244.0.26:46696 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0005306s
	[INFO] 10.244.0.26:60649 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0002675s
	
	
	==> coredns [f352c62c5f91] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[991026151]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:25:10.071) (total time: 30000ms):
	Trace[991026151]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:25:40.072)
	Trace[991026151]: [30.000969689s] [30.000969689s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[39099582]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:25:10.071) (total time: 30001ms):
	Trace[39099582]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:25:40.073)
	Trace[39099582]: [30.001949389s] [30.001949389s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.7:42977 - 62154 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0004605s
	[INFO] 10.244.0.7:42977 - 4553 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000289s
	[INFO] 10.244.0.7:46024 - 27039 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0003005s
	[INFO] 10.244.0.7:46024 - 2210 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000187899s
	[INFO] 10.244.0.7:36989 - 2663 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223599s
	[INFO] 10.244.0.7:36989 - 51297 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112s
	[INFO] 10.244.0.7:41608 - 41201 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002287s
	[INFO] 10.244.0.7:41608 - 21454 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001921s
	[INFO] 10.244.0.22:51003 - 19364 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460601s
	[INFO] 10.244.0.22:59130 - 19801 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100401s
	[INFO] 10.244.0.22:46443 - 32047 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001441s
	[INFO] 10.244.0.22:55706 - 23435 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.002082803s
	[INFO] 10.244.0.22:51635 - 54056 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002239303s
	
	
	==> describe nodes <==
	Name:               addons-363100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-363100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=addons-363100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T03_24_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-363100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-363100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:24:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-363100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:29:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:28:48 +0000   Mon, 20 May 2024 10:24:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:28:48 +0000   Mon, 20 May 2024 10:24:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:28:48 +0000   Mon, 20 May 2024 10:24:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:28:48 +0000   Mon, 20 May 2024 10:24:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.240.77
	  Hostname:    addons-363100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 a486b6ed03114769bd624d7b4aa6f07d
	  System UUID:                a0ea042b-527b-1346-bce8-c22ba4a019a0
	  Boot ID:                    1d452c2e-20d4-4665-91c6-2f34ab534856
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-62drz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  gadget                      gadget-cvvhl                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  gcp-auth                    gcp-auth-5db96cd9b4-v5wrs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-zbcwl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m42s
	  kube-system                 coredns-7db6d8ff4d-g9sxx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m14s
	  kube-system                 coredns-7db6d8ff4d-vwvqd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m13s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 csi-hostpathplugin-xqndx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-addons-363100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m29s
	  kube-system                 kube-apiserver-addons-363100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-addons-363100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-czj7g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-addons-363100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 nvidia-device-plugin-daemonset-bvjl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 snapshot-controller-745499f584-l2ttj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 snapshot-controller-745499f584-q74px         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  local-path-storage          local-path-provisioner-8d985888d-778sw       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-t959q              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             458Mi (11%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  Starting                 4m29s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m29s  kubelet          Node addons-363100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s  kubelet          Node addons-363100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s  kubelet          Node addons-363100 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m28s  kubelet          Node addons-363100 status is now: NodeReady
	  Normal  RegisteredNode           4m16s  node-controller  Node addons-363100 event: Registered Node addons-363100 in Controller
	
	
	==> dmesg <==
	[  +5.016604] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.236101] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.250059] kauditd_printk_skb: 120 callbacks suppressed
	[ +12.877599] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.376171] kauditd_printk_skb: 4 callbacks suppressed
	[May20 10:26] kauditd_printk_skb: 4 callbacks suppressed
	[May20 10:27] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.159449] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.331082] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.012019] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.957970] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.776458] hrtimer: interrupt took 4243200 ns
	[  +2.422260] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.006259] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.413178] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.454954] kauditd_printk_skb: 2 callbacks suppressed
	[May20 10:28] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.973685] kauditd_printk_skb: 27 callbacks suppressed
	[  +7.589399] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.040072] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.009024] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.313306] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.433450] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.792903] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.523574] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [24f0d56641e2] <==
	{"level":"info","ts":"2024-05-20T10:28:03.198778Z","caller":"traceutil/trace.go:171","msg":"trace[1063985858] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"453.521473ms","start":"2024-05-20T10:28:02.745243Z","end":"2024-05-20T10:28:03.198765Z","steps":["trace[1063985858] 'process raft request'  (duration: 96.145942ms)","trace[1063985858] 'compare'  (duration: 348.188417ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:28:03.198852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:28:02.745168Z","time spent":"453.657074ms","remote":"127.0.0.1:48428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":783,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-zbcwl.17d12b9c2b1346e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-768f948f8f-zbcwl.17d12b9c2b1346e1\" value_size:676 lease:6798341509912146181 >> failure:<>"}
	{"level":"info","ts":"2024-05-20T10:28:11.617276Z","caller":"traceutil/trace.go:171","msg":"trace[1917904658] transaction","detail":"{read_only:false; response_revision:1312; number_of_response:1; }","duration":"265.106649ms","start":"2024-05-20T10:28:11.352149Z","end":"2024-05-20T10:28:11.617255Z","steps":["trace[1917904658] 'process raft request'  (duration: 256.457757ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:28:14.505182Z","caller":"traceutil/trace.go:171","msg":"trace[1282129764] linearizableReadLoop","detail":"{readStateIndex:1390; appliedIndex:1389; }","duration":"212.457901ms","start":"2024-05-20T10:28:14.292705Z","end":"2024-05-20T10:28:14.505163Z","steps":["trace[1282129764] 'read index received'  (duration: 212.129701ms)","trace[1282129764] 'applied index is now lower than readState.Index'  (duration: 327.5µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:28:14.505596Z","caller":"traceutil/trace.go:171","msg":"trace[1650180077] transaction","detail":"{read_only:false; response_revision:1327; number_of_response:1; }","duration":"216.531197ms","start":"2024-05-20T10:28:14.289055Z","end":"2024-05-20T10:28:14.505586Z","steps":["trace[1650180077] 'process raft request'  (duration: 216.001898ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:14.505822Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.0983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-bvjl2.17d12b7615f9c9a6\" ","response":"range_response_count:1 size:859"}
	{"level":"info","ts":"2024-05-20T10:28:14.505851Z","caller":"traceutil/trace.go:171","msg":"trace[2100956697] range","detail":"{range_begin:/registry/events/kube-system/nvidia-device-plugin-daemonset-bvjl2.17d12b7615f9c9a6; range_end:; response_count:1; response_revision:1327; }","duration":"213.1592ms","start":"2024-05-20T10:28:14.292684Z","end":"2024-05-20T10:28:14.505843Z","steps":["trace[2100956697] 'agreement among raft nodes before linearized reading'  (duration: 213.0486ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:14.506101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.320063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:28:14.506558Z","caller":"traceutil/trace.go:171","msg":"trace[1682300984] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1327; }","duration":"146.874462ms","start":"2024-05-20T10:28:14.359673Z","end":"2024-05-20T10:28:14.506548Z","steps":["trace[1682300984] 'agreement among raft nodes before linearized reading'  (duration: 146.403163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:22.060535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.109864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-20T10:28:22.06107Z","caller":"traceutil/trace.go:171","msg":"trace[184914296] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1354; }","duration":"227.685364ms","start":"2024-05-20T10:28:21.833369Z","end":"2024-05-20T10:28:22.061054Z","steps":["trace[184914296] 'count revisions from in-memory index tree'  (duration: 227.041764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:22.060567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"493.827605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-05-20T10:28:22.061754Z","caller":"traceutil/trace.go:171","msg":"trace[214220733] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1354; }","duration":"495.125004ms","start":"2024-05-20T10:28:21.566618Z","end":"2024-05-20T10:28:22.061743Z","steps":["trace[214220733] 'range keys from in-memory index tree'  (duration: 493.689705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:22.062088Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:28:21.566603Z","time spent":"495.471404ms","remote":"127.0.0.1:48606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-05-20T10:28:22.060661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.487845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12930"}
	{"level":"info","ts":"2024-05-20T10:28:22.062713Z","caller":"traceutil/trace.go:171","msg":"trace[156012318] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1354; }","duration":"428.582844ms","start":"2024-05-20T10:28:21.63412Z","end":"2024-05-20T10:28:22.062703Z","steps":["trace[156012318] 'range keys from in-memory index tree'  (duration: 426.262545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:22.063003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:28:21.634105Z","time spent":"428.885544ms","remote":"127.0.0.1:48536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":12953,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-05-20T10:28:37.422796Z","caller":"traceutil/trace.go:171","msg":"trace[1046270805] transaction","detail":"{read_only:false; response_revision:1458; number_of_response:1; }","duration":"500.650389ms","start":"2024-05-20T10:28:36.922123Z","end":"2024-05-20T10:28:37.422774Z","steps":["trace[1046270805] 'process raft request'  (duration: 423.585767ms)","trace[1046270805] 'compare'  (duration: 74.295018ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:28:37.423544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:28:36.922104Z","time spent":"500.775889ms","remote":"127.0.0.1:48606","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-363100\" mod_revision:1397 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-363100\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-363100\" > >"}
	{"level":"info","ts":"2024-05-20T10:28:37.424207Z","caller":"traceutil/trace.go:171","msg":"trace[1236793658] linearizableReadLoop","detail":"{readStateIndex:1528; appliedIndex:1527; }","duration":"323.87051ms","start":"2024-05-20T10:28:37.100322Z","end":"2024-05-20T10:28:37.424192Z","steps":["trace[1236793658] 'read index received'  (duration: 245.309186ms)","trace[1236793658] 'applied index is now lower than readState.Index'  (duration: 78.386224ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:28:37.424822Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.672518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-20T10:28:37.424877Z","caller":"traceutil/trace.go:171","msg":"trace[1116755910] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1458; }","duration":"138.758319ms","start":"2024-05-20T10:28:37.286109Z","end":"2024-05-20T10:28:37.424868Z","steps":["trace[1116755910] 'agreement among raft nodes before linearized reading'  (duration: 138.663719ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:37.425205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.880112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-05-20T10:28:37.425258Z","caller":"traceutil/trace.go:171","msg":"trace[1023790024] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1458; }","duration":"324.962912ms","start":"2024-05-20T10:28:37.100287Z","end":"2024-05-20T10:28:37.42525Z","steps":["trace[1023790024] 'agreement among raft nodes before linearized reading'  (duration: 324.849812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:28:37.425374Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:28:37.100271Z","time spent":"325.003912ms","remote":"127.0.0.1:48516","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":1005,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" "}
	
	
	==> gcp-auth [bf223ff8ebf8] <==
	2024/05/20 10:28:03 GCP Auth Webhook started!
	2024/05/20 10:28:05 Ready to marshal response ...
	2024/05/20 10:28:05 Ready to write response ...
	2024/05/20 10:28:05 Ready to marshal response ...
	2024/05/20 10:28:05 Ready to write response ...
	2024/05/20 10:28:07 Ready to marshal response ...
	2024/05/20 10:28:07 Ready to write response ...
	2024/05/20 10:28:14 Ready to marshal response ...
	2024/05/20 10:28:14 Ready to write response ...
	2024/05/20 10:28:31 Ready to marshal response ...
	2024/05/20 10:28:31 Ready to write response ...
	2024/05/20 10:28:39 Ready to marshal response ...
	2024/05/20 10:28:39 Ready to write response ...
	2024/05/20 10:28:52 Ready to marshal response ...
	2024/05/20 10:28:52 Ready to write response ...
	
	
	==> kernel <==
	 10:29:11 up 6 min,  0 users,  load average: 2.40, 2.81, 1.36
	Linux addons-363100 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [59dd09b9f5f0] <==
	Trace[237961003]: ["List(recursive=true) etcd3" audit-id:3f47b1cd-057c-4834-bd71-ce31fe222b36,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 512ms (10:27:36.787)]
	Trace[237961003]: [512.250497ms] [512.250497ms] END
	I0520 10:27:59.912360       1 trace.go:236] Trace[145701228]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5086dbe4-d8ac-4feb-b86e-2d307eeda478,client:172.25.240.77,api-group:,api-version:v1,name:gadget-cvvhl.17d12b936c6aaea8,subresource:,namespace:gadget,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/gadget/events/gadget-cvvhl.17d12b936c6aaea8,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PATCH (20-May-2024 10:27:59.288) (total time: 623ms):
	Trace[145701228]: ["GuaranteedUpdate etcd3" audit-id:5086dbe4-d8ac-4feb-b86e-2d307eeda478,key:/events/gadget/gadget-cvvhl.17d12b936c6aaea8,type:*core.Event,resource:events 623ms (10:27:59.288)
	Trace[145701228]:  ---"Txn call completed" 618ms (10:27:59.912)]
	Trace[145701228]: ---"Object stored in database" 619ms (10:27:59.912)
	Trace[145701228]: [623.925629ms] [623.925629ms] END
	I0520 10:28:03.193018       1 trace.go:236] Trace[788058151]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:e78fef9a-83be-43de-8847-f863aa0ae446,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:resourcequotas,scope:namespace,url:/api/v1/namespaces/ingress-nginx/resourcequotas,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (20-May-2024 10:28:02.673) (total time: 519ms):
	Trace[788058151]: ["List(recursive=true) etcd3" audit-id:e78fef9a-83be-43de-8847-f863aa0ae446,key:/resourcequotas/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 519ms (10:28:02.673)]
	Trace[788058151]: [519.564971ms] [519.564971ms] END
	I0520 10:28:03.195810       1 trace.go:236] Trace[1316834242]: "List" accept:application/json, */*,audit-id:791b56f0-3692-442c-81eb-34e61dabeabb,client:172.25.240.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (20-May-2024 10:28:02.663) (total time: 532ms):
	Trace[1316834242]: ["List(recursive=true) etcd3" audit-id:791b56f0-3692-442c-81eb-34e61dabeabb,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 532ms (10:28:02.663)]
	Trace[1316834242]: [532.32439ms] [532.32439ms] END
	I0520 10:28:03.202642       1 trace.go:236] Trace[241135450]: "Create" accept:application/json, */*,audit-id:eea2905c-3b63-4fd5-8655-ba1aae1158b4,client:10.244.0.21,api-group:coordination.k8s.io,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/ingress-nginx/leases,user-agent:nginx-ingress-controller/v1.10.1 (linux/amd64) ingress-nginx/4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518,verb:POST (20-May-2024 10:28:02.668) (total time: 534ms):
	Trace[241135450]: [534.163693ms] [534.163693ms] END
	I0520 10:28:11.618800       1 trace.go:236] Trace[1204324412]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.25.240.77,type:*v1.Endpoints,resource:apiServerIPInfo (20-May-2024 10:28:11.112) (total time: 505ms):
	Trace[1204324412]: ---"Transaction prepared" 212ms (10:28:11.351)
	Trace[1204324412]: ---"Txn call completed" 266ms (10:28:11.618)
	Trace[1204324412]: [505.971421ms] [505.971421ms] END
	I0520 10:28:30.772353       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 10:28:37.425726       1 trace.go:236] Trace[725867731]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:dfbfded7-b01c-4e96-946b-78a5b610a433,client:172.25.240.77,api-group:coordination.k8s.io,api-version:v1,name:addons-363100,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/addons-363100,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (20-May-2024 10:28:36.919) (total time: 505ms):
	Trace[725867731]: ["GuaranteedUpdate etcd3" audit-id:dfbfded7-b01c-4e96-946b-78a5b610a433,key:/leases/kube-node-lease/addons-363100,type:*coordination.Lease,resource:leases.coordination.k8s.io 505ms (10:28:36.920)
	Trace[725867731]:  ---"Txn call completed" 503ms (10:28:37.425)]
	Trace[725867731]: [505.640997ms] [505.640997ms] END
	I0520 10:28:52.220287       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [367dcacfa4d1] <==
	I0520 10:27:25.157064       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:25.568699       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:26.250279       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:26.340492       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:26.662659       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:26.712447       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:26.732662       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:26.777099       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:27.263994       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:27.284695       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:27.309376       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:27.334146       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:56.024327       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:56.118819       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0520 10:27:57.015392       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:27:57.062876       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0520 10:28:00.603094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="216.501µs"
	I0520 10:28:03.782361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="39.951859ms"
	I0520 10:28:03.783875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="32µs"
	I0520 10:28:15.368493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="41.765961ms"
	I0520 10:28:15.372476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="427.899µs"
	I0520 10:28:26.030469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="5.9µs"
	I0520 10:28:48.378381       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="13.2µs"
	I0520 10:28:56.549588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="7.4µs"
	I0520 10:28:58.360242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="4.4µs"
	
	
	==> kube-proxy [c0142927d7f6] <==
	I0520 10:25:08.429101       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:25:08.545227       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.240.77"]
	I0520 10:25:08.815025       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:25:08.815192       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:25:08.815234       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:25:08.896369       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:25:08.897315       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:25:08.897345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:25:08.902078       1 config.go:192] "Starting service config controller"
	I0520 10:25:08.902265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:25:08.902369       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:25:08.902382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:25:08.917035       1 config.go:319] "Starting node config controller"
	I0520 10:25:08.917061       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:25:09.040402       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:25:09.040488       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:25:09.040605       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [126f2be4e9c1] <==
	W0520 10:24:39.863610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 10:24:39.863662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 10:24:39.874872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:24:39.875108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:24:39.877726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:24:39.877753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:24:39.928543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:24:39.928944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:24:39.941848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:24:39.942098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:24:39.991401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:24:39.991462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:24:40.140358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:24:40.140765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:24:40.163723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:24:40.164147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:24:40.198311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:24:40.198745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:24:40.238710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:24:40.239279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:24:40.280983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:24:40.281109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:24:40.294206       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:24:40.294255       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 10:24:43.326731       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:28:58 addons-363100 kubelet[2106]: I0520 10:28:58.314481    2106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f839153-72eb-4331-b147-6db46c4d13ee" path="/var/lib/kubelet/pods/8f839153-72eb-4331-b147-6db46c4d13ee/volumes"
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.202313    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c036e4bc-1693-11ef-a124-eaf51ef7bbdf\") pod \"217d6448-31a8-4b82-83f0-eedefc0e9126\" (UID: \"217d6448-31a8-4b82-83f0-eedefc0e9126\") "
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.202373    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsrcb\" (UniqueName: \"kubernetes.io/projected/217d6448-31a8-4b82-83f0-eedefc0e9126-kube-api-access-zsrcb\") pod \"217d6448-31a8-4b82-83f0-eedefc0e9126\" (UID: \"217d6448-31a8-4b82-83f0-eedefc0e9126\") "
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.202402    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/217d6448-31a8-4b82-83f0-eedefc0e9126-gcp-creds\") pod \"217d6448-31a8-4b82-83f0-eedefc0e9126\" (UID: \"217d6448-31a8-4b82-83f0-eedefc0e9126\") "
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.202491    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217d6448-31a8-4b82-83f0-eedefc0e9126-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "217d6448-31a8-4b82-83f0-eedefc0e9126" (UID: "217d6448-31a8-4b82-83f0-eedefc0e9126"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.209428    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217d6448-31a8-4b82-83f0-eedefc0e9126-kube-api-access-zsrcb" (OuterVolumeSpecName: "kube-api-access-zsrcb") pod "217d6448-31a8-4b82-83f0-eedefc0e9126" (UID: "217d6448-31a8-4b82-83f0-eedefc0e9126"). InnerVolumeSpecName "kube-api-access-zsrcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.212038    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^c036e4bc-1693-11ef-a124-eaf51ef7bbdf" (OuterVolumeSpecName: "task-pv-storage") pod "217d6448-31a8-4b82-83f0-eedefc0e9126" (UID: "217d6448-31a8-4b82-83f0-eedefc0e9126"). InnerVolumeSpecName "pvc-60603796-e910-4bac-a8da-5063a48368d2". PluginName "kubernetes.io/csi", VolumeGidValue ""
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.303549    2106 reconciler_common.go:282] "operationExecutor.UnmountDevice started for volume \"pvc-60603796-e910-4bac-a8da-5063a48368d2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c036e4bc-1693-11ef-a124-eaf51ef7bbdf\") on node \"addons-363100\" "
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.303583    2106 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zsrcb\" (UniqueName: \"kubernetes.io/projected/217d6448-31a8-4b82-83f0-eedefc0e9126-kube-api-access-zsrcb\") on node \"addons-363100\" DevicePath \"\""
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.303596    2106 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/217d6448-31a8-4b82-83f0-eedefc0e9126-gcp-creds\") on node \"addons-363100\" DevicePath \"\""
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.314854    2106 operation_generator.go:1001] UnmountDevice succeeded for volume "pvc-60603796-e910-4bac-a8da-5063a48368d2" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^c036e4bc-1693-11ef-a124-eaf51ef7bbdf") on node "addons-363100"
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.403946    2106 reconciler_common.go:289] "Volume detached for volume \"pvc-60603796-e910-4bac-a8da-5063a48368d2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c036e4bc-1693-11ef-a124-eaf51ef7bbdf\") on node \"addons-363100\" DevicePath \"\""
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.595983    2106 scope.go:117] "RemoveContainer" containerID="4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7"
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.637046    2106 scope.go:117] "RemoveContainer" containerID="4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7"
	May 20 10:29:00 addons-363100 kubelet[2106]: E0520 10:29:00.640230    2106 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7" containerID="4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7"
	May 20 10:29:00 addons-363100 kubelet[2106]: I0520 10:29:00.640278    2106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7"} err="failed to get container status \"4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4252778ffb4be5220d097c9df1a57bb1a66f3846faced38fbf4bdb3ae3ce6ac7"
	May 20 10:29:02 addons-363100 kubelet[2106]: I0520 10:29:02.324619    2106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217d6448-31a8-4b82-83f0-eedefc0e9126" path="/var/lib/kubelet/pods/217d6448-31a8-4b82-83f0-eedefc0e9126/volumes"
	May 20 10:29:05 addons-363100 kubelet[2106]: I0520 10:29:05.289206    2106 scope.go:117] "RemoveContainer" containerID="c24efba295b18964ba5f6a935126ac734027587ee16b48ed78e12bfdfd135cd8"
	May 20 10:29:07 addons-363100 kubelet[2106]: I0520 10:29:07.858423    2106 scope.go:117] "RemoveContainer" containerID="c24efba295b18964ba5f6a935126ac734027587ee16b48ed78e12bfdfd135cd8"
	May 20 10:29:07 addons-363100 kubelet[2106]: I0520 10:29:07.858869    2106 scope.go:117] "RemoveContainer" containerID="84e231a7dda24e219b4d00abd5a8b07c1e60d08fd7a210636d11d80b17c92033"
	May 20 10:29:07 addons-363100 kubelet[2106]: E0520 10:29:07.859389    2106 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-cvvhl_gadget(974e73bb-2679-4b4b-ba1b-5b3a6518ee12)\"" pod="gadget/gadget-cvvhl" podUID="974e73bb-2679-4b4b-ba1b-5b3a6518ee12"
	May 20 10:29:09 addons-363100 kubelet[2106]: I0520 10:29:09.133618    2106 scope.go:117] "RemoveContainer" containerID="84e231a7dda24e219b4d00abd5a8b07c1e60d08fd7a210636d11d80b17c92033"
	May 20 10:29:09 addons-363100 kubelet[2106]: E0520 10:29:09.134268    2106 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-cvvhl_gadget(974e73bb-2679-4b4b-ba1b-5b3a6518ee12)\"" pod="gadget/gadget-cvvhl" podUID="974e73bb-2679-4b4b-ba1b-5b3a6518ee12"
	May 20 10:29:09 addons-363100 kubelet[2106]: I0520 10:29:09.967222    2106 scope.go:117] "RemoveContainer" containerID="84e231a7dda24e219b4d00abd5a8b07c1e60d08fd7a210636d11d80b17c92033"
	May 20 10:29:09 addons-363100 kubelet[2106]: E0520 10:29:09.967698    2106 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-cvvhl_gadget(974e73bb-2679-4b4b-ba1b-5b3a6518ee12)\"" pod="gadget/gadget-cvvhl" podUID="974e73bb-2679-4b4b-ba1b-5b3a6518ee12"
	
	
	==> storage-provisioner [2a33a55117c4] <==
	I0520 10:25:30.446148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:25:30.531371       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:25:30.537481       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:25:30.564333       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:25:30.564579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-363100_f50b450c-6bc5-4565-8b51-654ff44cabee!
	I0520 10:25:30.565606       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d852154e-5012-4d61-b1e6-9a42c943202b", APIVersion:"v1", ResourceVersion:"740", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-363100_f50b450c-6bc5-4565-8b51-654ff44cabee became leader
	I0520 10:25:30.767287       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-363100_f50b450c-6bc5-4565-8b51-654ff44cabee!
	

-- /stdout --
** stderr ** 
	W0520 03:29:02.246260    3940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-363100 -n addons-363100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-363100 -n addons-363100: (13.0071133s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-363100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2jfb5 ingress-nginx-admission-patch-xgccn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-363100 describe pod ingress-nginx-admission-create-2jfb5 ingress-nginx-admission-patch-xgccn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-363100 describe pod ingress-nginx-admission-create-2jfb5 ingress-nginx-admission-patch-xgccn: exit status 1 (174.4904ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2jfb5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xgccn" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-363100 describe pod ingress-nginx-admission-create-2jfb5 ingress-nginx-admission-patch-xgccn: exit status 1
--- FAIL: TestAddons/parallel/Registry (82.03s)

TestCertOptions (10800.393s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-975600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-975600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m14.8956333s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-975600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-975600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.5188421s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-975600 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-975600 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-975600 -- "sudo cat /etc/kubernetes/admin.conf": (10.3156476s)
helpers_test.go:175: Cleaning up "cert-options-975600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-975600
E0520 06:20:25.065321    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestCertExpiration (10m15s)
	TestCertOptions (6m59s)
	TestNetworkPlugins (12m57s)
	TestStartStop (24m4s)
	TestStartStop/group/no-preload (2m16s)
	TestStartStop/group/no-preload/serial (2m16s)
	TestStartStop/group/no-preload/serial/FirstStart (2m16s)
	TestStartStop/group/old-k8s-version (6m57s)
	TestStartStop/group/old-k8s-version/serial (6m57s)
	TestStartStop/group/old-k8s-version/serial/FirstStart (6m57s)

goroutine 2327 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000014b60, 0xc00079bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008342a0, {0x51aba40, 0x2a, 0x2a}, {0x2e7ef1a?, 0xcc806f?, 0x51ced00?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007bf860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007bf860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000071680)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2100 [chan receive, 25 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000151e0, 0x3895b58)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2028
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 45 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 44
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 843 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3e0f0e0, 0xc000054060}, 0xc0008d5f50, 0xc0008d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3e0f0e0, 0xc000054060}, 0x11?, 0xc0008d5f50, 0xc0008d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3e0f0e0?, 0xc000054060?}, 0xc00130aea0?, 0xd57c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd58bc5?, 0xc00130aea0?, 0xc00159e140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 851
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 844 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 843
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 39 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 38
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 842 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008fd050, 0x36)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x291a0e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001446cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008fd080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000067910, {0x3deb6a0, 0xc001ccef90}, 0x1, 0xc000054060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000067910, 0x3b9aca00, 0x0, 0x1, 0xc000054060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 851
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2105 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000015ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000015ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000015ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000015ba0, 0xc001b7c200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2100
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 691 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x2097be38fe0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc1fdd6?, 0x525c160?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0013b7420, 0xc001f03bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0013b7408, 0x320, {0xc00073a3c0?, 0x0?, 0x0?}, 0xc0000a9808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0013b7408, 0xc001f03d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0013b7408)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc001426460)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001426460)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0005f80f0, {0x3e02180, 0xc001426460})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0005f80f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc001b64340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 678
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2302 [syscall, locked to thread]:
syscall.SyscallN(0xc88b6a?, {0xc00123bb20?, 0xc27ea5?, 0x525c160?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0002a3c00?, 0xc00123bb80?, 0xc1fdd6?, 0x525c160?, 0xc00123bc08?, 0xc1281b?, 0xc08ba6?, 0xc0008e4035?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7b8, {0xc0007c645c?, 0x3a4, 0xcc417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0011f3188?, {0xc0007c645c?, 0xc45170?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0011f3188, {0xc0007c645c, 0x3a4, 0x3a4})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000003320, {0xc0007c645c?, 0xc000604e00?, 0x2b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018d03f0, {0x3dea260, 0xc0019dc0a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc0018d03f0}, {0x3dea260, 0xc0019dc0a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00123be78?, {0x3dea3a0, 0xc0018d03f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc0018d03f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc0018d03f0}, {0x3dea320, 0xc000003320}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00147c240?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 610
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 44 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3e0f0e0, 0xc000054060}, 0xc001237f50, 0xc001237f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3e0f0e0, 0xc000054060}, 0x60?, 0xc001237f50, 0xc001237f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3e0f0e0?, 0xc000054060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xd9e3a5?, 0xc00097a000?, 0xc0008be660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 168
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 2309 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00130b520, {0x2e2de50?, 0x60400000004?}, 0xc001996380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00130b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00130b520, 0xc001996300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2104
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 43 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000935ed0, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x291a0e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001220c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000935f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000662330, {0x3deb6a0, 0xc0008e2540}, 0x1, 0xc000054060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000662330, 0x3b9aca00, 0x0, 0x1, 0xc000054060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 168
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 610 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ff8c1764de0?, {0xc0008f16a0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x770, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001620c90)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001c426e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001c426e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008684e0, 0xc001c426e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.Cleanup(0xc0008684e0, {0xc001e24000, 0x13}, 0xc001380540)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:178 +0x15f
k8s.io/minikube/test/integration.CleanupWithLogs(0xc0008684e0, {0xc001e24000, 0x13}, 0xc001380540)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:192 +0x19d
k8s.io/minikube/test/integration.TestCertOptions(0xc0008684e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:109 +0x1090
testing.tRunner(0xc0008684e0, 0x3895858)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2162 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00130a000, 0xc001910018)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1992
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2164 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130b1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00130b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00130b1e0, 0xc001996180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2179 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b64ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b64ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b64ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b64ea0, 0xc000676800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 167 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001220d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 91
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 168 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000935f00, 0xc000054060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 91
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2028 [chan receive, 25 minutes]:
testing.(*T).Run(0xc000869380, {0x2e232d1?, 0xd57333?}, 0x3895b58)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000869380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000869380, 0x3895980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2160 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b649c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b649c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b649c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b649c0, 0xc000676600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2163 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130a680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00130a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00130a680, 0xc001996100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2104 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000015a00, {0x2e247df?, 0x0?}, 0xc001996300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000015a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000015a00, 0xc001b7c1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2100
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1027 [chan send, 145 minutes]:
os/exec.(*Cmd).watchCtx(0xc00127a6e0, 0xc000055ec0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 818
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2311 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc000477b20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc12cf9?, 0xc00078e800?, 0x400?, 0x20956930a28?, 0xc000477c08?, 0xc1288a?, 0x20956930a28?, 0xc00078e800?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x494, {0xc00078e9f8?, 0x208, 0xcc417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000428788?, {0xc00078e9f8?, 0x0?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000428788, {0xc00078e9f8, 0x208, 0x208})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007001c8, {0xc00078e9f8?, 0x2097bec0288?, 0x6d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00145c210, {0x3dea260, 0xc0012c2000})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc00145c210}, {0x3dea260, 0xc0012c2000}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x50d19e0?, {0x3dea3a0, 0xc00145c210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc00145c210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc00145c210}, {0x3dea320, 0xc0007001c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x3895858?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2310
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2178 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b64d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b64d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b64d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b64d00, 0xc000676780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2175 [syscall, locked to thread]:
syscall.SyscallN(0x2097c16e398?, {0xc001b37b20?, 0xc27ea5?, 0x8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2097c16e398?, 0xc001b37b80?, 0xc1fdd6?, 0x525c160?, 0xc001b37c08?, 0xc12985?, 0x0?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6cc, {0xc0012ec1dc?, 0x7e24, 0xcc417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001649908?, {0xc0012ec1dc?, 0xc4c1be?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001649908, {0xc0012ec1dc, 0x7e24, 0x7e24})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019dc0b0, {0xc0012ec1dc?, 0x418e?, 0x7ebf?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0019125a0, {0x3dea260, 0xc00011c828})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc0019125a0}, {0x3dea260, 0xc00011c828}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dea3a0, 0xc0019125a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc0019125a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc0019125a0}, {0x3dea320, 0xc0019dc0b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x3895898?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2173
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2157 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b644e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b644e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b644e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b644e0, 0xc000676400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1218 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c42580, 0xc001d396e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1185
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 611 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ff8c1764de0?, {0xc001cf99a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x32c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000800ed0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00096d340)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00096d340)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000868680, 0xc00096d340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc000868680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:131 +0x576
testing.tRunner(0xc000868680, 0x3895850)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2298 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00096d340, 0xc000106d80)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 611
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2304 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c426e0, 0xc0008be780)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 610
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 850 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001446de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 827
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2176 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c42160, 0xc00147c2a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2173
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1992 [chan receive, 13 minutes]:
testing.(*T).Run(0xc000014820, {0x2e232d1?, 0xc7f48d?}, 0xc001910018)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000014820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000014820, 0x3895938)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 851 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008fd080, 0xc000054060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 827
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2106 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000015d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000015d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000015d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000015d40, 0xc001b7c280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2100
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2161 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b64b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b64b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b64b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b64b60, 0xc000676700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2159 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b64820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b64820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b64820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b64820, 0xc000676580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2158 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b64680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b64680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b64680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b64680, 0xc000676500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2312 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x2097c16e758?, {0xc001b35b20?, 0xc27ea5?, 0x4?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2097c16e758?, 0xc001b35b80?, 0xc1fdd6?, 0x525c160?, 0xc001b35c08?, 0xc12985?, 0x20956930a28?, 0x8000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6b8, {0xc00092f568?, 0x2a98, 0xcc417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000428c88?, {0xc00092f568?, 0x0?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000428c88, {0xc00092f568, 0x2a98, 0x2a98})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000700220, {0xc00092f568?, 0x2097bea9380?, 0x3e91?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00145c240, {0x3dea260, 0xc000002008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc00145c240}, {0x3dea260, 0xc000002008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dea3a0, 0xc00145c240})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc00145c240?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc00145c240}, {0x3dea320, 0xc000700220}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0008ba840?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2310
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2310 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ff8c1764de0?, {0xc001739ae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x640, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0008008a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000726000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000726000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00130b6c0, 0xc000726000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3e0ef20?, 0xc000740310?}, 0xc00130b6c0, {0xc001322708?, 0x664b4da4?}, {0xc02462cd74?, 0xc001739f60?}, {0xd57333?, 0xca8d6f?}, {0xc0013e0000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00130b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00130b6c0, 0xc001996380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2309
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2101 [chan receive, 8 minutes]:
testing.(*T).Run(0xc000015380, {0x2e247df?, 0x0?}, 0xc00145e080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000015380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000015380, 0xc001b7c100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2100
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2102 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000156c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000156c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000156c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0000156c0, 0xc001b7c140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2100
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2313 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000726000, 0xc000106f00)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2310
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2103 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0006245a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000015860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000015860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000015860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000015860, 0xc001b7c180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2100
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2297 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc000973b20?, 0xc27ea5?, 0x525c160?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000973b41?, 0xc000973b80?, 0xc1fdd6?, 0x525c160?, 0xc000973c08?, 0xc1281b?, 0xc08ba6?, 0xc000973b41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x538, {0xc00093353a?, 0x2c6, 0xc000933400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0006cbb88?, {0xc00093353a?, 0xc4c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0006cbb88, {0xc00093353a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000700c10, {0xc00093353a?, 0xc000973d98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00145c1b0, {0x3dea260, 0xc0012c2198})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc00145c1b0}, {0x3dea260, 0xc0012c2198}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dea3a0, 0xc00145c1b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc00145c1b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc00145c1b0}, {0x3dea320, 0xc000700c10}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000106cc0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 611
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2174 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc000108b70?, {0xc00089bb20?, 0xc27ea5?, 0x525c160?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0000dafc0?, 0xc00089bb80?, 0xc1fdd6?, 0x525c160?, 0xc00089bc08?, 0xc12985?, 0x20956930598?, 0xc0000db54d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x658, {0xc0018fc246?, 0x5ba, 0xcc417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001649408?, {0xc0018fc246?, 0xc4c171?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001649408, {0xc0018fc246, 0x5ba, 0x5ba})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019dc098, {0xc0018fc246?, 0xc00089bd98?, 0x207?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001912570, {0x3dea260, 0xc0008c6208})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc001912570}, {0x3dea260, 0xc0008c6208}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dea3a0, 0xc001912570})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc001912570?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc001912570}, {0x3dea320, 0xc0019dc098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000235390?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2173
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2173 [syscall, 8 minutes, locked to thread]:
syscall.SyscallN(0x7ff8c1764de0?, {0xc00089dae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5a4, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001548db0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001c42160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001c42160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00130b380, 0xc001c42160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3e0ef20?, 0xc00049c380?}, 0xc00130b380, {0xc001e24f60?, 0x664b4c8b?}, {0xc027761e94?, 0xc00089df60?}, {0xd57333?, 0xca8d6f?}, {0xc0008c40c0, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00130b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00130b380, 0xc00145e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2172
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2303 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc00199a410?, {0xc000791b20?, 0xc27ea5?, 0x525c160?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000791d30?, 0xc000791b80?, 0xc1fdd6?, 0x525c160?, 0xc000791c08?, 0xc1281b?, 0xc08ba6?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2ec, {0xc00093293a?, 0x2c6, 0xc000932800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0011f3688?, {0xc00093293a?, 0xc4c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0011f3688, {0xc00093293a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000003358, {0xc00093293a?, 0xc000791d98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018d0420, {0x3dea260, 0xc0007008a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc0018d0420}, {0x3dea260, 0xc0007008a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3dea3a0, 0xc0018d0420})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc0018d0420?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc0018d0420}, {0x3dea320, 0xc000003358}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0009220c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 610
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2172 [chan receive, 8 minutes]:
testing.(*T).Run(0xc00130aea0, {0x2e2de50?, 0x60400000004?}, 0xc00145e100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00130aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00130aea0, 0xc00145e080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2101
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2296 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc001b75b20?, 0xc0005f81e0?, 0xf?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001b75ba0?, 0xd9c799?, 0xc000112219?, 0x1e?, 0xc001b75c08?, 0xc1281b?, 0xc0013542a0?, 0xc000726580?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x558, {0xc0013f5205?, 0x5fb, 0xc0013f5000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0006cb688?, {0xc0013f5205?, 0x13?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0006cb688, {0xc0013f5205, 0x5fb, 0x5fb})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000700aa8, {0xc0013f5205?, 0xc001217a40?, 0x205?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00145c180, {0x3dea260, 0xc0008c6140})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3dea3a0, 0xc00145c180}, {0x3dea260, 0xc0008c6140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b75e78?, {0x3dea3a0, 0xc00145c180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x515fb60?, {0x3dea3a0?, 0xc00145c180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3dea3a0, 0xc00145c180}, {0x3dea320, 0xc000700aa8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0008be5a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 611
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

TestErrorSpam/setup (200.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-644700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 --driver=hyperv
E0520 03:33:04.556603    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:04.571994    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:04.588031    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:04.619685    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:04.667669    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:04.761733    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:04.936220    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:05.270693    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:05.925125    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:07.215646    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:09.780654    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:14.904158    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:25.158095    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:33:45.648467    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:34:26.616630    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:35:48.538444    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-644700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 --driver=hyperv: (3m20.375482s)
error_spam_test.go:96: unexpected stderr: "W0520 03:32:51.939413    8640 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1"
error_spam_test.go:110: minikube stdout:
* [nospam-644700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=18925
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-644700" primary control-plane node in "nospam-644700" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-644700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0520 03:32:51.939413    8640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
--- FAIL: TestErrorSpam/setup (200.38s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (35.23s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-379700 -n functional-379700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-379700 -n functional-379700: (12.4764037s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 logs -n 25: (8.8035448s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:37 PDT | 20 May 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:37 PDT | 20 May 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:37 PDT | 20 May 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:37 PDT | 20 May 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:37 PDT | 20 May 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:38 PDT | 20 May 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-644700 --log_dir                                     | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:38 PDT | 20 May 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-644700                                            | nospam-644700     | minikube1\jenkins | v1.33.1 | 20 May 24 03:38 PDT | 20 May 24 03:39 PDT |
	| start   | -p functional-379700                                        | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:39 PDT | 20 May 24 03:43 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-379700                                        | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:43 PDT | 20 May 24 03:45 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-379700 cache add                                 | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:45 PDT | 20 May 24 03:45 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-379700 cache add                                 | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:45 PDT | 20 May 24 03:45 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-379700 cache add                                 | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:45 PDT | 20 May 24 03:45 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-379700 cache add                                 | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:45 PDT | 20 May 24 03:46 PDT |
	|         | minikube-local-cache-test:functional-379700                 |                   |                   |         |                     |                     |
	| cache   | functional-379700 cache delete                              | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | minikube-local-cache-test:functional-379700                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	| ssh     | functional-379700 ssh sudo                                  | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-379700                                           | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-379700 ssh                                       | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-379700 cache reload                              | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	| ssh     | functional-379700 ssh                                       | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-379700 kubectl --                                | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:46 PDT | 20 May 24 03:46 PDT |
	|         | --context functional-379700                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:43:19
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:43:19.516207    5028 out.go:291] Setting OutFile to fd 752 ...
	I0520 03:43:19.520302    5028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:19.520302    5028 out.go:304] Setting ErrFile to fd 1004...
	I0520 03:43:19.520302    5028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:19.546721    5028 out.go:298] Setting JSON to false
	I0520 03:43:19.549717    5028 start.go:129] hostinfo: {"hostname":"minikube1","uptime":1796,"bootTime":1716200003,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:43:19.549717    5028 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:43:19.553724    5028 out.go:177] * [functional-379700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:43:19.557718    5028 notify.go:220] Checking for updates...
	I0520 03:43:19.559739    5028 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:43:19.562717    5028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:43:19.565534    5028 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:43:19.568419    5028 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:43:19.568419    5028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:43:19.573927    5028 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:19.573927    5028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:43:25.134315    5028 out.go:177] * Using the hyperv driver based on existing profile
	I0520 03:43:25.138163    5028 start.go:297] selected driver: hyperv
	I0520 03:43:25.138163    5028 start.go:901] validating driver "hyperv" against &{Name:functional-379700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-379700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.247.13 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:25.138163    5028 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:43:25.186274    5028 cni.go:84] Creating CNI manager for ""
	I0520 03:43:25.186274    5028 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:43:25.186274    5028 start.go:340] cluster config:
	{Name:functional-379700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-379700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.247.13 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:25.187038    5028 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:43:25.190688    5028 out.go:177] * Starting "functional-379700" primary control-plane node in "functional-379700" cluster
	I0520 03:43:25.194238    5028 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:43:25.194238    5028 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 03:43:25.194895    5028 cache.go:56] Caching tarball of preloaded images
	I0520 03:43:25.195224    5028 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 03:43:25.195569    5028 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:43:25.195833    5028 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\config.json ...
	I0520 03:43:25.198109    5028 start.go:360] acquireMachinesLock for functional-379700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:25.198542    5028 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-379700"
	I0520 03:43:25.198677    5028 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:43:25.198677    5028 fix.go:54] fixHost starting: 
	I0520 03:43:25.199253    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:28.072312    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:28.072389    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:28.072389    5028 fix.go:112] recreateIfNeeded on functional-379700: state=Running err=<nil>
	W0520 03:43:28.072389    5028 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:43:28.077250    5028 out.go:177] * Updating the running hyperv "functional-379700" VM ...
	I0520 03:43:28.079189    5028 machine.go:94] provisionDockerMachine start ...
	I0520 03:43:28.079721    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:30.349240    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:30.349240    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:30.349323    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:43:33.025066    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:43:33.025066    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:33.031665    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:43:33.032242    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:43:33.032242    5028 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:43:33.175974    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-379700
	
	I0520 03:43:33.175974    5028 buildroot.go:166] provisioning hostname "functional-379700"
	I0520 03:43:33.175974    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:35.429598    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:35.429598    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:35.429712    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:43:38.169343    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:43:38.169398    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:38.176220    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:43:38.176849    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:43:38.176849    5028 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-379700 && echo "functional-379700" | sudo tee /etc/hostname
	I0520 03:43:38.356699    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-379700
	
	I0520 03:43:38.356699    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:40.588176    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:40.588176    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:40.588176    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:43:43.227122    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:43:43.227122    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:43.233098    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:43:43.233678    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:43:43.233678    5028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-379700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-379700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-379700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:43:43.373277    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:43:43.373277    5028 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 03:43:43.373277    5028 buildroot.go:174] setting up certificates
	I0520 03:43:43.373277    5028 provision.go:84] configureAuth start
	I0520 03:43:43.373277    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:45.599917    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:45.599917    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:45.600648    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:43:48.293867    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:43:48.293867    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:48.294273    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:50.527943    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:50.527943    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:50.527943    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:43:53.200682    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:43:53.200842    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:53.200842    5028 provision.go:143] copyHostCerts
	I0520 03:43:53.200981    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 03:43:53.201319    5028 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 03:43:53.201319    5028 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 03:43:53.201626    5028 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 03:43:53.202946    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 03:43:53.203185    5028 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 03:43:53.203260    5028 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 03:43:53.203711    5028 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 03:43:53.204616    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 03:43:53.204802    5028 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 03:43:53.204802    5028 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 03:43:53.205192    5028 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 03:43:53.206348    5028 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-379700 san=[127.0.0.1 172.25.247.13 functional-379700 localhost minikube]
	I0520 03:43:53.584555    5028 provision.go:177] copyRemoteCerts
	I0520 03:43:53.598876    5028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:43:53.599022    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:43:55.879089    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:43:55.879124    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:55.879180    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:43:58.593783    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:43:58.593861    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:43:58.594099    5028 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
	I0520 03:43:58.709387    5028 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1105055s)
	I0520 03:43:58.709387    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 03:43:58.710392    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:43:58.756081    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 03:43:58.756491    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 03:43:58.808275    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 03:43:58.808885    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 03:43:58.861728    5028 provision.go:87] duration metric: took 15.4884335s to configureAuth
	I0520 03:43:58.861728    5028 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:43:58.862608    5028 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:58.862608    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:01.171373    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:01.171432    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:01.171432    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:03.913742    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:03.913742    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:03.923830    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:44:03.924684    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:44:03.924684    5028 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:44:04.074590    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:44:04.074650    5028 buildroot.go:70] root file system type: tmpfs
	I0520 03:44:04.074852    5028 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:44:04.075005    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:06.289619    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:06.289797    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:06.289797    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:08.989072    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:08.989072    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:08.995843    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:44:08.995843    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:44:08.996367    5028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:44:09.153305    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:44:09.153573    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:11.421709    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:11.421709    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:11.422780    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:14.156966    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:14.156966    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:14.162918    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:44:14.163577    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:44:14.163577    5028 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:44:14.304097    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:44:14.304097    5028 machine.go:97] duration metric: took 46.2248577s to provisionDockerMachine
	I0520 03:44:14.304097    5028 start.go:293] postStartSetup for "functional-379700" (driver="hyperv")
	I0520 03:44:14.304097    5028 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:44:14.313571    5028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:44:14.318445    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:16.567528    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:16.567735    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:16.567735    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:19.258551    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:19.259373    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:19.259532    5028 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
	I0520 03:44:19.365763    5028 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0520828s)
	I0520 03:44:19.380630    5028 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:44:19.387622    5028 command_runner.go:130] > NAME=Buildroot
	I0520 03:44:19.387685    5028 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 03:44:19.387685    5028 command_runner.go:130] > ID=buildroot
	I0520 03:44:19.387685    5028 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 03:44:19.387685    5028 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 03:44:19.387987    5028 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 03:44:19.388057    5028 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 03:44:19.388485    5028 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 03:44:19.389435    5028 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 03:44:19.389435    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 03:44:19.390140    5028 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4100\hosts -> hosts in /etc/test/nested/copy/4100
	I0520 03:44:19.390140    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4100\hosts -> /etc/test/nested/copy/4100/hosts
	I0520 03:44:19.404140    5028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4100
	I0520 03:44:19.424269    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 03:44:19.479345    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4100\hosts --> /etc/test/nested/copy/4100/hosts (40 bytes)
	I0520 03:44:19.534346    5028 start.go:296] duration metric: took 5.2302434s for postStartSetup
	I0520 03:44:19.534488    5028 fix.go:56] duration metric: took 54.3357515s for fixHost
	I0520 03:44:19.534488    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:21.845205    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:21.845991    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:21.846061    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:24.549647    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:24.549647    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:24.554611    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:44:24.555493    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:44:24.555493    5028 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 03:44:24.688700    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201864.677776911
	
	I0520 03:44:24.688820    5028 fix.go:216] guest clock: 1716201864.677776911
	I0520 03:44:24.688820    5028 fix.go:229] Guest: 2024-05-20 03:44:24.677776911 -0700 PDT Remote: 2024-05-20 03:44:19.5344884 -0700 PDT m=+60.097236601 (delta=5.143288511s)
	I0520 03:44:24.688940    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:26.996739    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:26.996739    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:26.996739    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:29.718330    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:29.719062    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:29.726342    5028 main.go:141] libmachine: Using SSH client type: native
	I0520 03:44:29.726342    5028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.247.13 22 <nil> <nil>}
	I0520 03:44:29.726342    5028 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716201864
	I0520 03:44:29.878994    5028 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 10:44:24 UTC 2024
	
	I0520 03:44:29.878994    5028 fix.go:236] clock set: Mon May 20 10:44:24 UTC 2024
	 (err=<nil>)
	I0520 03:44:29.878994    5028 start.go:83] releasing machines lock for "functional-379700", held for 1m4.6803813s
	I0520 03:44:29.879506    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:32.170657    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:32.170657    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:32.171489    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:34.886880    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:34.886880    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:34.892061    5028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:44:34.892263    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:34.902986    5028 ssh_runner.go:195] Run: cat /version.json
	I0520 03:44:34.902986    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:44:37.312071    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:37.312071    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:37.312071    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:37.316462    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:44:37.316635    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:37.316842    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:44:40.191179    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:40.191440    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:40.191617    5028 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
	I0520 03:44:40.217033    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:44:40.217087    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:44:40.217087    5028 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
	I0520 03:44:40.293012    5028 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 03:44:40.293122    5028 ssh_runner.go:235] Completed: cat /version.json: (5.3901302s)
	W0520 03:44:40.293358    5028 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 03:44:40.306421    5028 ssh_runner.go:195] Run: systemctl --version
	I0520 03:44:40.376885    5028 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 03:44:40.376986    5028 command_runner.go:130] > systemd 252 (252)
	I0520 03:44:40.376986    5028 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 03:44:40.377095    5028 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4849536s)
	I0520 03:44:40.386887    5028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 03:44:40.399746    5028 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 03:44:40.400597    5028 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:44:40.413371    5028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 03:44:40.432646    5028 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 03:44:40.432646    5028 start.go:494] detecting cgroup driver to use...
	I0520 03:44:40.432646    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:44:40.468123    5028 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 03:44:40.481558    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 03:44:40.519045    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:44:40.540127    5028 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:44:40.553355    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:44:40.585498    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:44:40.621082    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:44:40.668493    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:44:40.702424    5028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:44:40.743057    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:44:40.780548    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:44:40.817040    5028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
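The run of `sed` edits above rewrites /etc/containerd/config.toml in place to force the "cgroupfs" driver. The SystemdCgroup toggle, for instance, is equivalent to this regexp replace — a sketch on an assumed config fragment, not minikube's code:

```go
package main

import (
	"fmt"
	"regexp"
)

// disableSystemdCgroup is the Go equivalent of the sed edit above:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// preserving the line's leading indentation via the captured group.
func disableSystemdCgroup(conf string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(conf, "${1}SystemdCgroup = false")
}

func main() {
	// Assumed fragment of /etc/containerd/config.toml.
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(disableSystemdCgroup(conf))
}
```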
	I0520 03:44:40.864884    5028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:44:40.885537    5028 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 03:44:40.902538    5028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:44:40.935536    5028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:44:41.221129    5028 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 03:44:41.253037    5028 start.go:494] detecting cgroup driver to use...
	I0520 03:44:41.266375    5028 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:44:41.292175    5028 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 03:44:41.292272    5028 command_runner.go:130] > [Unit]
	I0520 03:44:41.292272    5028 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 03:44:41.292272    5028 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 03:44:41.292272    5028 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 03:44:41.292272    5028 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 03:44:41.292272    5028 command_runner.go:130] > StartLimitBurst=3
	I0520 03:44:41.292272    5028 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 03:44:41.292272    5028 command_runner.go:130] > [Service]
	I0520 03:44:41.292272    5028 command_runner.go:130] > Type=notify
	I0520 03:44:41.292272    5028 command_runner.go:130] > Restart=on-failure
	I0520 03:44:41.292272    5028 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 03:44:41.292272    5028 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 03:44:41.292272    5028 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 03:44:41.292272    5028 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 03:44:41.292272    5028 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 03:44:41.292272    5028 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 03:44:41.292272    5028 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 03:44:41.292272    5028 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 03:44:41.292272    5028 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 03:44:41.292272    5028 command_runner.go:130] > ExecStart=
	I0520 03:44:41.292272    5028 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 03:44:41.292272    5028 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 03:44:41.292272    5028 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 03:44:41.292272    5028 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 03:44:41.292272    5028 command_runner.go:130] > LimitNOFILE=infinity
	I0520 03:44:41.292272    5028 command_runner.go:130] > LimitNPROC=infinity
	I0520 03:44:41.292272    5028 command_runner.go:130] > LimitCORE=infinity
	I0520 03:44:41.292272    5028 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 03:44:41.292272    5028 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 03:44:41.292272    5028 command_runner.go:130] > TasksMax=infinity
	I0520 03:44:41.292272    5028 command_runner.go:130] > TimeoutStartSec=0
	I0520 03:44:41.292272    5028 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 03:44:41.292272    5028 command_runner.go:130] > Delegate=yes
	I0520 03:44:41.292272    5028 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 03:44:41.292272    5028 command_runner.go:130] > KillMode=process
	I0520 03:44:41.292272    5028 command_runner.go:130] > [Install]
	I0520 03:44:41.292272    5028 command_runner.go:130] > WantedBy=multi-user.target
	I0520 03:44:41.307006    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:44:41.345131    5028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:44:41.401999    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:44:41.443338    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:44:41.470328    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:44:41.512032    5028 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 03:44:41.525632    5028 ssh_runner.go:195] Run: which cri-dockerd
	I0520 03:44:41.530808    5028 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 03:44:41.544600    5028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:44:41.563585    5028 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:44:41.613444    5028 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:44:41.914598    5028 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:44:42.190177    5028 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:44:42.190177    5028 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:44:42.235567    5028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:44:42.484143    5028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:44:55.447499    5028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9633418s)
	I0520 03:44:55.461319    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:44:55.499407    5028 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 03:44:55.557321    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:44:55.593740    5028 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:44:55.808625    5028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:44:56.023116    5028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:44:56.219746    5028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:44:56.261016    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:44:56.299750    5028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:44:56.498095    5028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:44:56.627047    5028 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:44:56.641150    5028 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:44:56.650564    5028 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 03:44:56.650636    5028 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 03:44:56.650636    5028 command_runner.go:130] > Device: 0,22	Inode: 1498        Links: 1
	I0520 03:44:56.650636    5028 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 03:44:56.650636    5028 command_runner.go:130] > Access: 2024-05-20 10:44:56.543230607 +0000
	I0520 03:44:56.650636    5028 command_runner.go:130] > Modify: 2024-05-20 10:44:56.519223686 +0000
	I0520 03:44:56.650636    5028 command_runner.go:130] > Change: 2024-05-20 10:44:56.522224551 +0000
	I0520 03:44:56.650740    5028 command_runner.go:130] >  Birth: -
	I0520 03:44:56.650788    5028 start.go:562] Will wait 60s for crictl version
	I0520 03:44:56.664450    5028 ssh_runner.go:195] Run: which crictl
	I0520 03:44:56.670886    5028 command_runner.go:130] > /usr/bin/crictl
	I0520 03:44:56.682825    5028 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:44:56.733835    5028 command_runner.go:130] > Version:  0.1.0
	I0520 03:44:56.733938    5028 command_runner.go:130] > RuntimeName:  docker
	I0520 03:44:56.733938    5028 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 03:44:56.733938    5028 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 03:44:56.734004    5028 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 03:44:56.744281    5028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:44:56.777956    5028 command_runner.go:130] > 26.0.2
	I0520 03:44:56.788879    5028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:44:56.816638    5028 command_runner.go:130] > 26.0.2
	I0520 03:44:56.822981    5028 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 03:44:56.823237    5028 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 03:44:56.827218    5028 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 03:44:56.827218    5028 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 03:44:56.827218    5028 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 03:44:56.827218    5028 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 03:44:56.829809    5028 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 03:44:56.829809    5028 ip.go:210] interface addr: 172.25.240.1/20
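The `getIPForInterface` scan above walks the host's adapters in order and keeps the first one whose name matches the requested prefix, logging each rejection along the way. A minimal sketch of that selection; the helper name and the hard-coded adapter list are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// firstMatching mirrors the ip.go scan above: return the first
// interface name that starts with the requested prefix, or false if
// none matches. Hypothetical helper, not minikube's actual function.
func firstMatching(names []string, prefix string) (string, bool) {
	for _, n := range names {
		if strings.HasPrefix(n, prefix) {
			return n, true
		}
	}
	return "", false
}

func main() {
	// Adapter names as they appear in the log.
	names := []string{"Ethernet 2", "Loopback Pseudo-Interface 1", "vEthernet (Default Switch)"}
	if n, ok := firstMatching(names, "vEthernet (Default Switch)"); ok {
		fmt.Println("found prefix matching interface:", n)
	}
}
```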
	I0520 03:44:56.837493    5028 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 03:44:56.854695    5028 command_runner.go:130] > 172.25.240.1	host.minikube.internal
	I0520 03:44:56.855169    5028 kubeadm.go:877] updating cluster {Name:functional-379700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-379700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.247.13 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 03:44:56.855614    5028 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:56.865459    5028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:44:56.892111    5028 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 03:44:56.892889    5028 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 03:44:56.892889    5028 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 03:44:56.892924    5028 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 03:44:56.892924    5028 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 03:44:56.892924    5028 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 03:44:56.892973    5028 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 03:44:56.892973    5028 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:44:56.893025    5028 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:44:56.893108    5028 docker.go:615] Images already preloaded, skipping extraction
	I0520 03:44:56.903907    5028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:44:56.926558    5028 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 03:44:56.927459    5028 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 03:44:56.927459    5028 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 03:44:56.927459    5028 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 03:44:56.927459    5028 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 03:44:56.927459    5028 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 03:44:56.927459    5028 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 03:44:56.927459    5028 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:44:56.927568    5028 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:44:56.927568    5028 cache_images.go:84] Images are preloaded, skipping loading
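The cache_images.go decision above — "Images are preloaded, skipping loading" — amounts to a set-containment check: loading is skipped only when every required image appears in the output of `docker images --format {{.Repository}}:{{.Tag}}`. A sketch under assumed (trimmed-down) image lists; the helper name is an assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// allPreloaded reports whether every required image name:tag shows up
// in the `docker images` output, mirroring the skip decision above.
func allPreloaded(stdout string, required []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(stdout, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			have[line] = true
		}
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	// Two entries taken from the log's image list.
	stdout := "registry.k8s.io/kube-apiserver:v1.30.1\nregistry.k8s.io/pause:3.9\n"
	required := []string{"registry.k8s.io/kube-apiserver:v1.30.1", "registry.k8s.io/pause:3.9"}
	fmt.Println("Images are preloaded:", allPreloaded(stdout, required))
}
```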
	I0520 03:44:56.927683    5028 kubeadm.go:928] updating node { 172.25.247.13 8441 v1.30.1 docker true true} ...
	I0520 03:44:56.927875    5028 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-379700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.247.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-379700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 03:44:56.937437    5028 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 03:44:56.973719    5028 command_runner.go:130] > cgroupfs
	I0520 03:44:56.974022    5028 cni.go:84] Creating CNI manager for ""
	I0520 03:44:56.974080    5028 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:56.974132    5028 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 03:44:56.974132    5028 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.247.13 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-379700 NodeName:functional-379700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.247.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.247.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 03:44:56.974132    5028 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.247.13
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-379700"
	  kubeletExtraArgs:
	    node-ip: 172.25.247.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.247.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 03:44:56.988554    5028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 03:44:57.006531    5028 command_runner.go:130] > kubeadm
	I0520 03:44:57.006531    5028 command_runner.go:130] > kubectl
	I0520 03:44:57.006531    5028 command_runner.go:130] > kubelet
	I0520 03:44:57.006531    5028 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 03:44:57.020945    5028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 03:44:57.041395    5028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 03:44:57.080872    5028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 03:44:57.114866    5028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0520 03:44:57.171369    5028 ssh_runner.go:195] Run: grep 172.25.247.13	control-plane.minikube.internal$ /etc/hosts
	I0520 03:44:57.178434    5028 command_runner.go:130] > 172.25.247.13	control-plane.minikube.internal
	I0520 03:44:57.191945    5028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:44:57.410235    5028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:44:57.434944    5028 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700 for IP: 172.25.247.13
	I0520 03:44:57.434944    5028 certs.go:194] generating shared ca certs ...
	I0520 03:44:57.434944    5028 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:44:57.436045    5028 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 03:44:57.436681    5028 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 03:44:57.436681    5028 certs.go:256] generating profile certs ...
	I0520 03:44:57.437299    5028 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.key
	I0520 03:44:57.437860    5028 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\apiserver.key.7e1c64cb
	I0520 03:44:57.438120    5028 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\proxy-client.key
	I0520 03:44:57.438120    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 03:44:57.438120    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 03:44:57.438120    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 03:44:57.438666    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 03:44:57.438843    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 03:44:57.439024    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 03:44:57.439174    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 03:44:57.439464    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 03:44:57.440146    5028 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 03:44:57.440465    5028 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 03:44:57.440632    5028 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 03:44:57.440740    5028 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 03:44:57.440740    5028 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 03:44:57.441265    5028 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 03:44:57.441842    5028 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 03:44:57.442126    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 03:44:57.442262    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 03:44:57.442483    5028 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:44:57.443855    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 03:44:57.494869    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 03:44:57.541245    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 03:44:57.606206    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 03:44:57.651142    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 03:44:57.699057    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 03:44:57.746915    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 03:44:57.794684    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 03:44:57.845729    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 03:44:57.890564    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 03:44:57.936613    5028 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 03:44:57.978993    5028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 03:44:58.021319    5028 ssh_runner.go:195] Run: openssl version
	I0520 03:44:58.031421    5028 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 03:44:58.048497    5028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 03:44:58.081858    5028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 03:44:58.089766    5028 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 03:44:58.089900    5028 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 03:44:58.102250    5028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 03:44:58.110526    5028 command_runner.go:130] > 51391683
	I0520 03:44:58.123391    5028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 03:44:58.154902    5028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 03:44:58.191343    5028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 03:44:58.199432    5028 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 03:44:58.200209    5028 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 03:44:58.213527    5028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 03:44:58.224907    5028 command_runner.go:130] > 3ec20f2e
	I0520 03:44:58.239247    5028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 03:44:58.278504    5028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 03:44:58.314075    5028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:44:58.321653    5028 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:44:58.321653    5028 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:44:58.334807    5028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:44:58.349229    5028 command_runner.go:130] > b5213941
	I0520 03:44:58.363990    5028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
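The three `openssl x509 -hash` / `ln -fs` pairs above are OpenSSL's hashed-directory CA installation: each certificate under `/etc/ssl/certs` gets a symlink named `<subject-hash>.0` so the library can look it up by subject. A minimal sketch of the same technique against a throwaway self-signed cert (the `demo-ca` name and temp paths are hypothetical, not from the log):

```shell
# Create a throwaway self-signed CA, compute its subject hash, and install
# the hash-named symlink OpenSSL uses to locate certs in a CA directory.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$certdir/demo.key" -out "$certdir/demo.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certdir/demo.pem")
ln -fs "$certdir/demo.pem" "$certdir/$hash.0"
```

The `test -L ... || ln -fs ...` form in the log is the idempotent variant: it only creates the symlink when one is not already present.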
	I0520 03:44:58.394801    5028 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 03:44:58.401302    5028 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 03:44:58.401302    5028 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 03:44:58.401302    5028 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0520 03:44:58.401302    5028 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 03:44:58.401302    5028 command_runner.go:130] > Access: 2024-05-20 10:42:09.606483203 +0000
	I0520 03:44:58.401302    5028 command_runner.go:130] > Modify: 2024-05-20 10:42:09.606483203 +0000
	I0520 03:44:58.401302    5028 command_runner.go:130] > Change: 2024-05-20 10:42:09.606483203 +0000
	I0520 03:44:58.401302    5028 command_runner.go:130] >  Birth: 2024-05-20 10:42:09.606483203 +0000
	I0520 03:44:58.412626    5028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 03:44:58.421272    5028 command_runner.go:130] > Certificate will not expire
	I0520 03:44:58.433862    5028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 03:44:58.441768    5028 command_runner.go:130] > Certificate will not expire
	I0520 03:44:58.454243    5028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 03:44:58.463156    5028 command_runner.go:130] > Certificate will not expire
	I0520 03:44:58.477656    5028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 03:44:58.486236    5028 command_runner.go:130] > Certificate will not expire
	I0520 03:44:58.501089    5028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 03:44:58.509577    5028 command_runner.go:130] > Certificate will not expire
	I0520 03:44:58.520891    5028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 03:44:58.532030    5028 command_runner.go:130] > Certificate will not expire
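Each `-checkend 86400` probe above asks openssl whether the certificate will expire within the next 86400 seconds (24 h); when it will not, openssl exits 0 and prints `Certificate will not expire`, which is the success path logged here. A sketch against a throwaway cert (names and paths hypothetical):

```shell
# -checkend N exits 0 (and prints "Certificate will not expire") when the
# cert is still valid N seconds from now; a 2-day cert passes a 1-day check.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=demo" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.crt" 2>/dev/null
out=$(openssl x509 -noout -in "$tmp/demo.crt" -checkend 86400)
```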
	I0520 03:44:58.532515    5028 kubeadm.go:391] StartCluster: {Name:functional-379700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:functional-379700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.247.13 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:58.541546    5028 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:44:58.575868    5028 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 03:44:58.594855    5028 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0520 03:44:58.594855    5028 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0520 03:44:58.594855    5028 command_runner.go:130] > /var/lib/minikube/etcd:
	I0520 03:44:58.594855    5028 command_runner.go:130] > member
	W0520 03:44:58.594855    5028 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 03:44:58.594855    5028 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 03:44:58.594855    5028 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 03:44:58.606851    5028 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 03:44:58.624861    5028 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:44:58.625506    5028 kubeconfig.go:125] found "functional-379700" server: "https://172.25.247.13:8441"
	I0520 03:44:58.626913    5028 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:44:58.627773    5028 kapi.go:59] client config for functional-379700: &rest.Config{Host:"https://172.25.247.13:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-379700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-379700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:44:58.629002    5028 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 03:44:58.640928    5028 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 03:44:58.657147    5028 kubeadm.go:624] The running cluster does not require reconfiguration: 172.25.247.13
	I0520 03:44:58.657294    5028 kubeadm.go:1154] stopping kube-system containers ...
	I0520 03:44:58.667848    5028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:44:58.694915    5028 command_runner.go:130] > 77efc1a40094
	I0520 03:44:58.694915    5028 command_runner.go:130] > 5ac570d8f0eb
	I0520 03:44:58.694915    5028 command_runner.go:130] > e46c4e02b788
	I0520 03:44:58.694915    5028 command_runner.go:130] > 4a07d02ae9e3
	I0520 03:44:58.694915    5028 command_runner.go:130] > 3bb9059f94d3
	I0520 03:44:58.694915    5028 command_runner.go:130] > 65d0742b9b31
	I0520 03:44:58.694915    5028 command_runner.go:130] > 335ad06ad6f9
	I0520 03:44:58.694915    5028 command_runner.go:130] > 42fbcbecb658
	I0520 03:44:58.694915    5028 command_runner.go:130] > d69024fc6d5a
	I0520 03:44:58.694915    5028 command_runner.go:130] > 46379da6626c
	I0520 03:44:58.694915    5028 command_runner.go:130] > 1e5ba694a474
	I0520 03:44:58.695974    5028 command_runner.go:130] > 26b9543603d3
	I0520 03:44:58.695974    5028 command_runner.go:130] > 2697ab469a75
	I0520 03:44:58.695974    5028 command_runner.go:130] > 4d6e00574232
	I0520 03:44:58.696181    5028 docker.go:483] Stopping containers: [77efc1a40094 5ac570d8f0eb e46c4e02b788 4a07d02ae9e3 3bb9059f94d3 65d0742b9b31 335ad06ad6f9 42fbcbecb658 d69024fc6d5a 46379da6626c 1e5ba694a474 26b9543603d3 2697ab469a75 4d6e00574232]
	I0520 03:44:58.708045    5028 ssh_runner.go:195] Run: docker stop 77efc1a40094 5ac570d8f0eb e46c4e02b788 4a07d02ae9e3 3bb9059f94d3 65d0742b9b31 335ad06ad6f9 42fbcbecb658 d69024fc6d5a 46379da6626c 1e5ba694a474 26b9543603d3 2697ab469a75 4d6e00574232
	I0520 03:44:58.736107    5028 command_runner.go:130] > 77efc1a40094
	I0520 03:44:58.736107    5028 command_runner.go:130] > 5ac570d8f0eb
	I0520 03:44:58.736176    5028 command_runner.go:130] > e46c4e02b788
	I0520 03:44:58.736176    5028 command_runner.go:130] > 4a07d02ae9e3
	I0520 03:44:58.736176    5028 command_runner.go:130] > 3bb9059f94d3
	I0520 03:44:58.736176    5028 command_runner.go:130] > 65d0742b9b31
	I0520 03:44:58.736176    5028 command_runner.go:130] > 335ad06ad6f9
	I0520 03:44:58.736176    5028 command_runner.go:130] > 42fbcbecb658
	I0520 03:44:58.736176    5028 command_runner.go:130] > d69024fc6d5a
	I0520 03:44:58.736252    5028 command_runner.go:130] > 46379da6626c
	I0520 03:44:58.736252    5028 command_runner.go:130] > 1e5ba694a474
	I0520 03:44:58.736252    5028 command_runner.go:130] > 26b9543603d3
	I0520 03:44:58.736252    5028 command_runner.go:130] > 2697ab469a75
	I0520 03:44:58.736252    5028 command_runner.go:130] > 4d6e00574232
	I0520 03:44:58.748561    5028 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 03:44:58.819585    5028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:44:58.836907    5028 command_runner.go:130] > -rw------- 1 root root 5647 May 20 10:42 /etc/kubernetes/admin.conf
	I0520 03:44:58.837095    5028 command_runner.go:130] > -rw------- 1 root root 5657 May 20 10:42 /etc/kubernetes/controller-manager.conf
	I0520 03:44:58.837153    5028 command_runner.go:130] > -rw------- 1 root root 2007 May 20 10:42 /etc/kubernetes/kubelet.conf
	I0520 03:44:58.837153    5028 command_runner.go:130] > -rw------- 1 root root 5605 May 20 10:42 /etc/kubernetes/scheduler.conf
	I0520 03:44:58.837237    5028 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 May 20 10:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 May 20 10:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May 20 10:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 May 20 10:42 /etc/kubernetes/scheduler.conf
	
	I0520 03:44:58.849963    5028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0520 03:44:58.866125    5028 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0520 03:44:58.878048    5028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0520 03:44:58.896391    5028 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0520 03:44:58.909238    5028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0520 03:44:58.937461    5028 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:44:58.951357    5028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:44:58.986848    5028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0520 03:44:59.003857    5028 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:44:59.016962    5028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
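The grep-then-`rm -f` pairs above implement a keep-or-regenerate check: a kubeconfig survives only if it already points at the expected `control-plane.minikube.internal:8441` endpoint; otherwise it is deleted so the subsequent `kubeadm init phase kubeconfig` step rewrites it. A file-local sketch of that check (directory and file names hypothetical):

```shell
# Keep config files that already reference the expected endpoint; delete
# the rest so they can be regenerated with the correct server address.
endpoint="https://control-plane.minikube.internal:8441"
dir=$(mktemp -d)
printf 'server: %s\n' "$endpoint" > "$dir/admin.conf"        # up to date
printf 'server: https://old-host:8443\n' > "$dir/sched.conf"  # stale
for conf in "$dir"/*.conf; do
  grep -q "$endpoint" "$conf" || rm -f "$conf"
done
```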
	I0520 03:44:59.058878    5028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:44:59.076433    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 03:44:59.158277    5028 command_runner.go:130] > [certs] Using the existing "sa" key
	I0520 03:44:59.158277    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:45:00.739337    5028 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 03:45:00.739756    5028 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0520 03:45:00.739816    5028 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0520 03:45:00.739816    5028 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0520 03:45:00.739816    5028 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 03:45:00.739816    5028 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 03:45:00.739910    5028 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.581538s)
	I0520 03:45:00.739910    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:45:01.078747    5028 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 03:45:01.078747    5028 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 03:45:01.078747    5028 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 03:45:01.078747    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:45:01.195037    5028 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 03:45:01.195037    5028 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 03:45:01.195145    5028 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 03:45:01.195145    5028 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 03:45:01.195145    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:45:01.352062    5028 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 03:45:01.352162    5028 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:45:01.366618    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:45:01.868088    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:45:02.376845    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:45:02.866465    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:45:02.918254    5028 command_runner.go:130] > 5046
	I0520 03:45:02.918459    5028 api_server.go:72] duration metric: took 1.566366s to wait for apiserver process to appear ...
	I0520 03:45:02.918459    5028 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:45:02.918531    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:06.298635    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 03:45:06.298843    5028 api_server.go:103] status: https://172.25.247.13:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 03:45:06.298843    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:06.328275    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 03:45:06.328646    5028 api_server.go:103] status: https://172.25.247.13:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
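The healthz loop around these lines keeps polling `https://<node-ip>:8441/healthz` until the apiserver answers 200, tolerating the early 403 (anonymous RBAC not yet bootstrapped) and 500 (post-start hooks still running) responses. A network-free sketch of that retry shape, with a fake probe standing in for the HTTPS GET (the `healthz` function and its failure count are hypothetical):

```shell
# Poll a health probe until it succeeds; this fake probe fails twice
# (standing in for the 403/500 responses seen in the log) before passing.
attempts=0
healthz() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # the real probe is an HTTPS GET to /healthz
}
until healthz; do :; done
```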
	I0520 03:45:06.424018    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:06.432061    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 03:45:06.432061    5028 api_server.go:103] status: https://172.25.247.13:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 03:45:06.924928    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:06.933716    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 03:45:06.933716    5028 api_server.go:103] status: https://172.25.247.13:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 03:45:07.425601    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:07.434101    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 03:45:07.434101    5028 api_server.go:103] status: https://172.25.247.13:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 03:45:07.932525    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:07.942121    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 200:
	ok
	I0520 03:45:07.942652    5028 round_trippers.go:463] GET https://172.25.247.13:8441/version
	I0520 03:45:07.942652    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:07.942652    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:07.942652    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:07.955227    5028 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 03:45:07.955227    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:07.955227    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:07.955227    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:07.955227    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:07.955227    5028 round_trippers.go:580]     Content-Length: 263
	I0520 03:45:07.955227    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:07 GMT
	I0520 03:45:07.955227    5028 round_trippers.go:580]     Audit-Id: 20a5cad1-24c5-4dba-ad83-2f2156f0c982
	I0520 03:45:07.955338    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:07.955464    5028 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 03:45:07.955648    5028 api_server.go:141] control plane version: v1.30.1
	I0520 03:45:07.955648    5028 api_server.go:131] duration metric: took 5.037183s to wait for apiserver health ...
	I0520 03:45:07.955648    5028 cni.go:84] Creating CNI manager for ""
	I0520 03:45:07.955648    5028 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:45:07.958542    5028 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 03:45:07.972767    5028 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 03:45:07.998233    5028 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 03:45:08.047304    5028 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 03:45:08.047784    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:08.047906    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:08.047906    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:08.047961    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:08.057883    5028 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 03:45:08.057883    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:08.057883    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:08.057883    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:08 GMT
	I0520 03:45:08.057951    5028 round_trippers.go:580]     Audit-Id: 72b4f0b2-a4f1-4ef5-ae5f-d49437537eda
	I0520 03:45:08.057951    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:08.057951    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:08.057951    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:08.058690    5028 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"514","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51525 chars]
	I0520 03:45:08.064322    5028 system_pods.go:59] 7 kube-system pods found
	I0520 03:45:08.064322    5028 system_pods.go:61] "coredns-7db6d8ff4d-gn54n" [fcc9490b-312e-48f0-b0aa-687e3c005f39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 03:45:08.064445    5028 system_pods.go:61] "etcd-functional-379700" [4cb21cbd-451f-44d5-9751-4ca6757c73fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 03:45:08.064445    5028 system_pods.go:61] "kube-apiserver-functional-379700" [0929729a-d7cf-423b-aa56-eda92cd65ca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 03:45:08.064445    5028 system_pods.go:61] "kube-controller-manager-functional-379700" [4bfc2761-c3ce-435c-a97c-c45e3bb52387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 03:45:08.064445    5028 system_pods.go:61] "kube-proxy-dsfcm" [ef35b0df-375a-4dd9-8677-c61b5cb5691b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 03:45:08.064445    5028 system_pods.go:61] "kube-scheduler-functional-379700" [34876aed-79ab-4b82-afc1-d05cd777c4b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 03:45:08.064445    5028 system_pods.go:61] "storage-provisioner" [6379f4b6-01e2-443a-8b28-13183f7119e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 03:45:08.064445    5028 system_pods.go:74] duration metric: took 17.141ms to wait for pod list to return data ...
	I0520 03:45:08.064445    5028 node_conditions.go:102] verifying NodePressure condition ...
	I0520 03:45:08.064702    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes
	I0520 03:45:08.064730    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:08.064730    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:08.064730    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:08.068299    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:08.068443    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:08.068443    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:08.068443    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:08 GMT
	I0520 03:45:08.068443    5028 round_trippers.go:580]     Audit-Id: 41088e6f-6b30-4edf-9eb3-e249425cf3a3
	I0520 03:45:08.068443    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:08.068443    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:08.068443    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:08.068720    5028 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0520 03:45:08.069397    5028 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 03:45:08.069457    5028 node_conditions.go:123] node cpu capacity is 2
	I0520 03:45:08.069515    5028 node_conditions.go:105] duration metric: took 5.0699ms to run NodePressure ...
	I0520 03:45:08.069515    5028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:45:08.615409    5028 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 03:45:08.615409    5028 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 03:45:08.615409    5028 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 03:45:08.615409    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0520 03:45:08.615409    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:08.615409    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:08.615409    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:08.630684    5028 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0520 03:45:08.630684    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:08.631346    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:08.631346    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:08 GMT
	I0520 03:45:08.631346    5028 round_trippers.go:580]     Audit-Id: 1668e0b1-22fb-49a3-858a-449e6f5519e3
	I0520 03:45:08.631346    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:08.631346    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:08.631346    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:08.633566    5028 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"509","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30957 chars]
	I0520 03:45:08.635399    5028 kubeadm.go:733] kubelet initialised
	I0520 03:45:08.635399    5028 kubeadm.go:734] duration metric: took 19.9894ms waiting for restarted kubelet to initialise ...
	I0520 03:45:08.635399    5028 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 03:45:08.635399    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:08.635399    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:08.635399    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:08.635399    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:08.647623    5028 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 03:45:08.647623    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:08.647623    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:08.647623    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:08 GMT
	I0520 03:45:08.647623    5028 round_trippers.go:580]     Audit-Id: fc63c7bc-cc2f-4a6b-8934-4efe4e2695db
	I0520 03:45:08.647623    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:08.647623    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:08.647623    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:08.652770    5028 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"514","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51525 chars]
	I0520 03:45:08.655132    5028 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:08.655132    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:08.655132    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:08.655132    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:08.655132    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:08.672943    5028 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 03:45:08.672943    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:08.672943    5028 round_trippers.go:580]     Audit-Id: cc77d611-a66e-4ae3-bef4-c6c9131276eb
	I0520 03:45:08.672943    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:08.672943    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:08.672943    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:08.672943    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:08.672943    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:08 GMT
	I0520 03:45:08.673594    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"514","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0520 03:45:08.674381    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:08.674381    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:08.674381    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:08.674381    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:08.680155    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:08.680155    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:08.680155    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:08.680155    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:08.680155    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:08 GMT
	I0520 03:45:08.680155    5028 round_trippers.go:580]     Audit-Id: 8c2eb6fe-3566-436a-a107-93d574f889d8
	I0520 03:45:08.680155    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:08.680155    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:08.680155    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:09.165742    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:09.165742    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:09.165742    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:09.165742    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:09.170417    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:09.171217    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:09.171217    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:09.171217    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:09.171217    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:09.171217    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:09.171217    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:09 GMT
	I0520 03:45:09.171217    5028 round_trippers.go:580]     Audit-Id: ec66fe04-a088-454e-b776-0a8dfc78da65
	I0520 03:45:09.172014    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"514","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0520 03:45:09.172763    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:09.172763    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:09.172763    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:09.172763    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:09.174954    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:09.175721    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:09.175721    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:09.175721    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:09.175721    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:09 GMT
	I0520 03:45:09.175721    5028 round_trippers.go:580]     Audit-Id: f0bf1c8b-2a23-4320-860b-a513230c9ff9
	I0520 03:45:09.175721    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:09.175721    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:09.175721    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:09.666837    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:09.666837    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:09.666837    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:09.666837    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:09.671091    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:09.671175    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:09.671175    5028 round_trippers.go:580]     Audit-Id: 1994048f-7906-4f41-94b4-d146499eded8
	I0520 03:45:09.671175    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:09.671175    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:09.671264    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:09.671264    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:09.671264    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:09 GMT
	I0520 03:45:09.671379    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"514","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0520 03:45:09.672295    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:09.672295    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:09.672295    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:09.672295    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:09.680670    5028 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 03:45:09.680670    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:09.680670    5028 round_trippers.go:580]     Audit-Id: bfc4cd01-cab9-457b-8642-d2a97c102484
	I0520 03:45:09.680670    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:09.680670    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:09.680670    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:09.680670    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:09.680670    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:09 GMT
	I0520 03:45:09.680670    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:10.171162    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:10.171383    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:10.171383    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:10.171446    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:10.176237    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:10.176237    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:10.176237    5028 round_trippers.go:580]     Audit-Id: 22741740-6f80-4651-ad86-338b6a906aa9
	I0520 03:45:10.176237    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:10.176237    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:10.176237    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:10.176237    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:10.176237    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:10 GMT
	I0520 03:45:10.176808    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:10.177287    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:10.177287    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:10.177287    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:10.177287    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:10.182226    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:10.182226    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:10.182226    5028 round_trippers.go:580]     Audit-Id: f3910bf2-6fa0-4207-bb9d-0e4a6ac0d2eb
	I0520 03:45:10.182226    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:10.182801    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:10.182801    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:10.182801    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:10.182801    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:10 GMT
	I0520 03:45:10.183040    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:10.655520    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:10.655520    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:10.655520    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:10.655520    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:10.660421    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:10.661302    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:10.661302    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:10.661302    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:10 GMT
	I0520 03:45:10.661302    5028 round_trippers.go:580]     Audit-Id: c5553586-0f31-4975-abd9-41edbbaad96e
	I0520 03:45:10.661386    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:10.661386    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:10.661386    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:10.661660    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:10.662358    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:10.662388    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:10.662388    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:10.662434    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:10.664644    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:10.664644    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:10.664644    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:10.664644    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:10.664644    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:10 GMT
	I0520 03:45:10.664644    5028 round_trippers.go:580]     Audit-Id: f2bd45bb-789a-439e-89bb-29f602e965fb
	I0520 03:45:10.664644    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:10.664644    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:10.665430    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:10.665958    5028 pod_ready.go:102] pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace has status "Ready":"False"
	I0520 03:45:11.160618    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:11.161004    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:11.161004    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:11.161004    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:11.164959    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:11.165111    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:11.165111    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:11.165111    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:11 GMT
	I0520 03:45:11.165111    5028 round_trippers.go:580]     Audit-Id: a6c02e12-b20b-4420-8d14-6513523ade03
	I0520 03:45:11.165111    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:11.165111    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:11.165187    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:11.165567    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:11.166221    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:11.166221    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:11.166221    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:11.166424    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:11.168552    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:11.168552    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:11.168552    5028 round_trippers.go:580]     Audit-Id: 107466eb-43e7-4a72-9baa-4b4abf90f70f
	I0520 03:45:11.168552    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:11.168552    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:11.168552    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:11.168552    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:11.168552    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:11 GMT
	I0520 03:45:11.169618    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:11.668745    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:11.668745    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:11.668745    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:11.668745    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:11.673114    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:11.673114    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:11.673114    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:11.673114    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:11.673114    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:11.673114    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:11 GMT
	I0520 03:45:11.673114    5028 round_trippers.go:580]     Audit-Id: 77b8e846-316a-444e-b39b-4c32cdb124bb
	I0520 03:45:11.673114    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:11.673114    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:11.674158    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:11.674158    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:11.674158    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:11.674158    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:11.677407    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:11.677595    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:11.677595    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:11 GMT
	I0520 03:45:11.677595    5028 round_trippers.go:580]     Audit-Id: 1dad14fc-657b-4220-92a4-0c0d9896adf4
	I0520 03:45:11.677595    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:11.677595    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:11.677595    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:11.677595    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:11.677818    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:12.166890    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:12.166890    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:12.166890    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:12.166890    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:12.171460    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:12.171956    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:12.171956    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:12.171956    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:12.171956    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:12.171956    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:12.172040    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:12 GMT
	I0520 03:45:12.172040    5028 round_trippers.go:580]     Audit-Id: 07ed47ac-20cf-45fd-8716-671ce3b0037e
	I0520 03:45:12.172040    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:12.173041    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:12.173161    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:12.173161    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:12.173161    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:12.177061    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:12.177257    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:12.177257    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:12.177257    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:12.177257    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:12 GMT
	I0520 03:45:12.177257    5028 round_trippers.go:580]     Audit-Id: d2fae366-c97a-4ab8-afe9-67385b90f581
	I0520 03:45:12.177257    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:12.177363    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:12.177893    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:12.666185    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:12.666363    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:12.666363    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:12.666363    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:12.670154    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:12.670948    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:12.670948    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:12.670948    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:12.670948    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:12.670948    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:12 GMT
	I0520 03:45:12.670948    5028 round_trippers.go:580]     Audit-Id: 7001c277-d2ab-4872-bff1-88cc871251c9
	I0520 03:45:12.670948    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:12.671262    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:12.671452    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:12.671452    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:12.671452    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:12.671452    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:12.674174    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:12.674475    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:12.674475    5028 round_trippers.go:580]     Audit-Id: 6d34dd00-18ca-4f1e-97cf-7ff25cf47c97
	I0520 03:45:12.674475    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:12.674475    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:12.674475    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:12.674475    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:12.674475    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:12 GMT
	I0520 03:45:12.674475    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:12.675237    5028 pod_ready.go:102] pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace has status "Ready":"False"
	I0520 03:45:13.165268    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:13.165413    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:13.165413    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:13.165413    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:13.170504    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:13.170504    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:13.170504    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:13.170884    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:13.170884    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:13.170884    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:13 GMT
	I0520 03:45:13.170884    5028 round_trippers.go:580]     Audit-Id: 1b525c6e-18e6-4453-b942-be6e46fb5a06
	I0520 03:45:13.170884    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:13.171054    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:13.171980    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:13.172041    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:13.172041    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:13.172041    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:13.174411    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:13.175053    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:13.175053    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:13.175053    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:13.175053    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:13.175053    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:13 GMT
	I0520 03:45:13.175053    5028 round_trippers.go:580]     Audit-Id: bbcdb41b-2244-450b-bd60-e2a159d277bd
	I0520 03:45:13.175053    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:13.175420    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:13.666917    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:13.666917    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:13.666976    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:13.666976    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:13.669826    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:13.669826    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:13.669826    5028 round_trippers.go:580]     Audit-Id: 41274a3e-d550-456a-9797-a41a5332652c
	I0520 03:45:13.669826    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:13.669826    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:13.669826    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:13.669826    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:13.669826    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:13 GMT
	I0520 03:45:13.672929    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:13.673908    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:13.673908    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:13.673908    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:13.674443    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:13.677836    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:13.677836    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:13.677836    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:13.677836    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:13.677836    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:13.677994    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:13.677994    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:13 GMT
	I0520 03:45:13.677994    5028 round_trippers.go:580]     Audit-Id: b7e5a401-3129-4001-9896-d0bd9ae49e9a
	I0520 03:45:13.678401    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:14.168408    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:14.168597    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:14.168597    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:14.168597    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:14.172006    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:14.172006    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:14.172006    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:14.172006    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:14.172929    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:14.172929    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:14.172929    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:14 GMT
	I0520 03:45:14.172929    5028 round_trippers.go:580]     Audit-Id: c8b7261f-89b3-4596-b258-c966ca534889
	I0520 03:45:14.173562    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:14.173957    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:14.173957    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:14.173957    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:14.173957    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:14.177096    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:14.177096    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:14.177096    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:14 GMT
	I0520 03:45:14.177096    5028 round_trippers.go:580]     Audit-Id: f2b61697-6598-474e-951d-2001d1d99588
	I0520 03:45:14.177096    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:14.177096    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:14.177096    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:14.177096    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:14.177096    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:14.669359    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:14.669359    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:14.669359    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:14.669359    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:14.676080    5028 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 03:45:14.676080    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:14.676080    5028 round_trippers.go:580]     Audit-Id: 0c3ada02-40c8-446e-8724-a183147ff52b
	I0520 03:45:14.676080    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:14.676080    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:14.676080    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:14.676080    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:14.676080    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:14 GMT
	I0520 03:45:14.676080    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:14.676722    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:14.676722    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:14.676722    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:14.676722    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:14.680042    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:14.680042    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:14.680042    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:14.680042    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:14.680042    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:14.680042    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:14.680042    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:14 GMT
	I0520 03:45:14.680042    5028 round_trippers.go:580]     Audit-Id: 4e657227-70df-4b00-8074-44b97aaeea95
	I0520 03:45:14.680042    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:14.681090    5028 pod_ready.go:102] pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace has status "Ready":"False"
	I0520 03:45:15.168293    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:15.168293    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:15.168293    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:15.168293    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:15.171869    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:15.172222    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:15.172222    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:15 GMT
	I0520 03:45:15.172292    5028 round_trippers.go:580]     Audit-Id: 8f79b94c-b85b-4932-abb8-3f696ac2bc36
	I0520 03:45:15.172292    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:15.172292    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:15.172292    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:15.172292    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:15.172491    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"520","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0520 03:45:15.173167    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:15.173167    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:15.173167    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:15.173167    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:15.175743    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:15.175743    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:15.176609    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:15 GMT
	I0520 03:45:15.176609    5028 round_trippers.go:580]     Audit-Id: 6a1a1355-608c-4c71-ae59-28ab06f6f7f4
	I0520 03:45:15.176609    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:15.176609    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:15.176609    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:15.176609    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:15.176688    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:15.667308    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:15.667395    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:15.667395    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:15.667395    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:15.671091    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:15.671747    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:15.671747    5028 round_trippers.go:580]     Audit-Id: a56ae5e7-640d-4a56-9847-c3aa1fa85cd1
	I0520 03:45:15.671747    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:15.671747    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:15.671747    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:15.671747    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:15.671747    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:15 GMT
	I0520 03:45:15.672101    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"580","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0520 03:45:15.672592    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:15.672592    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:15.672592    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:15.672592    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:15.675158    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:15.675158    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:15.675158    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:15.676183    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:15.676183    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:15.676183    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:15 GMT
	I0520 03:45:15.676231    5028 round_trippers.go:580]     Audit-Id: fb7c4e3e-c861-42c7-b38c-bab002eedb5a
	I0520 03:45:15.676231    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:15.676524    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:15.676524    5028 pod_ready.go:92] pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:15.676524    5028 pod_ready.go:81] duration metric: took 7.0213837s for pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:15.677056    5028 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:15.677113    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:15.677264    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:15.677264    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:15.677264    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:15.679158    5028 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 03:45:15.680034    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:15.680034    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:15.680034    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:15.680034    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:15 GMT
	I0520 03:45:15.680076    5028 round_trippers.go:580]     Audit-Id: 3a7b46ee-40ed-4210-8328-8816b3b24740
	I0520 03:45:15.680076    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:15.680076    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:15.680260    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"509","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0520 03:45:15.680476    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:15.680476    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:15.680476    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:15.680476    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:15.682212    5028 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 03:45:15.683026    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:15.683026    5028 round_trippers.go:580]     Audit-Id: 883cc168-12d7-4e8e-952a-93ea79196880
	I0520 03:45:15.683026    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:15.683100    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:15.683124    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:15.683124    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:15.683124    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:15 GMT
	I0520 03:45:15.683386    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:16.182339    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:16.182339    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:16.182339    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:16.182339    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:16.187132    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:16.187132    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:16.187132    5028 round_trippers.go:580]     Audit-Id: 988b6be3-f03a-4095-a0e5-4299780a0632
	I0520 03:45:16.187132    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:16.187132    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:16.187132    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:16.187132    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:16.187132    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:16 GMT
	I0520 03:45:16.187376    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"509","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0520 03:45:16.187946    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:16.188087    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:16.188087    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:16.188087    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:16.190367    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:16.191316    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:16.191316    5028 round_trippers.go:580]     Audit-Id: 5321f12b-41e5-4ca5-9392-323c052159aa
	I0520 03:45:16.191316    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:16.191316    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:16.191316    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:16.191316    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:16.191316    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:16 GMT
	I0520 03:45:16.191463    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:16.684244    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:16.684244    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:16.684244    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:16.684244    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:16.688913    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:16.688995    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:16.688995    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:16.688995    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:16 GMT
	I0520 03:45:16.688995    5028 round_trippers.go:580]     Audit-Id: 7f8b8cde-b594-42eb-9541-e2d9aee09946
	I0520 03:45:16.688995    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:16.688995    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:16.688995    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:16.689232    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"509","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0520 03:45:16.689923    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:16.689923    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:16.690074    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:16.690074    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:16.695525    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:16.695525    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:16.695525    5028 round_trippers.go:580]     Audit-Id: 94e6bd06-c15f-4a60-865b-a8f0cbbb8272
	I0520 03:45:16.695525    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:16.695525    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:16.695525    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:16.695525    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:16.695525    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:16 GMT
	I0520 03:45:16.695525    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:17.181479    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:17.181669    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:17.181669    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:17.181669    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:17.186513    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:17.186597    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:17.186597    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:17.186597    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:17.186597    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:17.186660    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:17 GMT
	I0520 03:45:17.186660    5028 round_trippers.go:580]     Audit-Id: b05f968d-6550-4829-b460-90bf13353538
	I0520 03:45:17.186660    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:17.186832    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"509","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0520 03:45:17.187635    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:17.187635    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:17.187635    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:17.187715    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:17.190009    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:17.190009    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:17.190009    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:17 GMT
	I0520 03:45:17.190009    5028 round_trippers.go:580]     Audit-Id: 4e61d08c-72c9-4463-9bcc-03ff4ba9b25a
	I0520 03:45:17.190009    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:17.191017    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:17.191017    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:17.191017    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:17.191259    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:17.685260    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:17.685609    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:17.685814    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:17.685814    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:17.691207    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:17.691207    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:17.691603    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:17.691603    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:17 GMT
	I0520 03:45:17.691603    5028 round_trippers.go:580]     Audit-Id: 0898abc8-9784-4809-b901-aeb09bd00e9c
	I0520 03:45:17.691603    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:17.691603    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:17.691603    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:17.691836    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"509","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0520 03:45:17.692637    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:17.692704    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:17.692704    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:17.692704    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:17.695892    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:17.695892    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:17.695892    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:17.695892    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:17 GMT
	I0520 03:45:17.695892    5028 round_trippers.go:580]     Audit-Id: 834d88f9-7424-4b3d-9baf-0e12906c24a3
	I0520 03:45:17.695892    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:17.696066    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:17.696066    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:17.696784    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:17.696784    5028 pod_ready.go:102] pod "etcd-functional-379700" in "kube-system" namespace has status "Ready":"False"
	I0520 03:45:18.184714    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:18.184804    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.184870    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.184870    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.189118    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:18.189118    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.189118    5028 round_trippers.go:580]     Audit-Id: ca617749-d9b5-4b1d-963b-c3e362933084
	I0520 03:45:18.189118    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.189118    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.189118    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.189118    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.189118    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.189719    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"585","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6373 chars]
	I0520 03:45:18.190343    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:18.190343    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.190343    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.190343    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.192907    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.193622    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.193622    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.193622    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.193622    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.193622    5028 round_trippers.go:580]     Audit-Id: 677d3370-5767-4ec3-b695-ad25d4ba20e4
	I0520 03:45:18.193622    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.193622    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.193833    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:18.193833    5028 pod_ready.go:92] pod "etcd-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:18.193833    5028 pod_ready.go:81] duration metric: took 2.5167739s for pod "etcd-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.193833    5028 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.194566    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-379700
	I0520 03:45:18.194566    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.194566    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.194566    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.196913    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.196913    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.196913    5028 round_trippers.go:580]     Audit-Id: 2b340fb9-e7cc-4237-bf3d-260b0f3432e4
	I0520 03:45:18.196913    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.196913    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.196913    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.196913    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.196913    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.198218    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-379700","namespace":"kube-system","uid":"0929729a-d7cf-423b-aa56-eda92cd65ca9","resourceVersion":"579","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.247.13:8441","kubernetes.io/config.hash":"8945aa8d8b9e2ca441fc04568a50a45a","kubernetes.io/config.mirror":"8945aa8d8b9e2ca441fc04568a50a45a","kubernetes.io/config.seen":"2024-05-20T10:42:23.333251572Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0520 03:45:18.198385    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:18.198385    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.198385    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.198385    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.200960    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.200960    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.200960    5028 round_trippers.go:580]     Audit-Id: 273d3d3f-ffa9-425d-8881-e87d11ae92e9
	I0520 03:45:18.200960    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.200960    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.200960    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.200960    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.200960    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.202104    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:18.202245    5028 pod_ready.go:92] pod "kube-apiserver-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:18.202245    5028 pod_ready.go:81] duration metric: took 8.412ms for pod "kube-apiserver-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.202245    5028 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.202245    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-379700
	I0520 03:45:18.202245    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.202245    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.202245    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.205011    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.205011    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.205011    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.205011    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.205011    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.205011    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.205011    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.205011    5028 round_trippers.go:580]     Audit-Id: e0860c9f-1124-41b3-b7bb-25183810492e
	I0520 03:45:18.205011    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-379700","namespace":"kube-system","uid":"4bfc2761-c3ce-435c-a97c-c45e3bb52387","resourceVersion":"577","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"79b3d0d30e88afb17392da9e30389486","kubernetes.io/config.mirror":"79b3d0d30e88afb17392da9e30389486","kubernetes.io/config.seen":"2024-05-20T10:42:23.333252672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0520 03:45:18.206231    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:18.206335    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.206335    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.206335    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.208050    5028 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 03:45:18.208050    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.208050    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.208839    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.208839    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.208839    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.208839    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.208839    5028 round_trippers.go:580]     Audit-Id: b55838ba-40d4-496f-8173-045778f87d53
	I0520 03:45:18.208985    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:18.209878    5028 pod_ready.go:92] pod "kube-controller-manager-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:18.209878    5028 pod_ready.go:81] duration metric: took 7.6333ms for pod "kube-controller-manager-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.209878    5028 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dsfcm" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.209878    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-proxy-dsfcm
	I0520 03:45:18.209878    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.209878    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.209878    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.212746    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.212746    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.212746    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.212746    5028 round_trippers.go:580]     Audit-Id: 4ccc3df0-72fc-494b-8aae-a368491b47d9
	I0520 03:45:18.212746    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.213493    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.213493    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.213493    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.213870    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dsfcm","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef35b0df-375a-4dd9-8677-c61b5cb5691b","resourceVersion":"522","creationTimestamp":"2024-05-20T10:42:36Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9b3b8b0c-ce7b-42d1-9da9-0a08718eb4c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b3b8b0c-ce7b-42d1-9da9-0a08718eb4c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0520 03:45:18.214016    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:18.214016    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.214016    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.214016    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.216608    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.216608    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.216608    5028 round_trippers.go:580]     Audit-Id: 8f403662-d939-4a57-8a7a-535bc628d97b
	I0520 03:45:18.217037    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.217037    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.217037    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.217037    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.217037    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.217399    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:18.217800    5028 pod_ready.go:92] pod "kube-proxy-dsfcm" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:18.217800    5028 pod_ready.go:81] duration metric: took 7.9224ms for pod "kube-proxy-dsfcm" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.217856    5028 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:18.217893    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-379700
	I0520 03:45:18.218018    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.218018    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.218018    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.222583    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:18.222583    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.222583    5028 round_trippers.go:580]     Audit-Id: 5347d9a4-57ed-4416-a630-f397cfecc123
	I0520 03:45:18.222583    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.222583    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.222583    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.222583    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.222583    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.222583    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-379700","namespace":"kube-system","uid":"34876aed-79ab-4b82-afc1-d05cd777c4b1","resourceVersion":"512","creationTimestamp":"2024-05-20T10:42:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.mirror":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.seen":"2024-05-20T10:42:15.164480141Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5446 chars]
	I0520 03:45:18.223236    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:18.223236    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.223236    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.223236    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.227988    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:18.228174    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.228174    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.228196    5028 round_trippers.go:580]     Audit-Id: 362bc43c-e88c-4451-bdc8-fcea6fd842fd
	I0520 03:45:18.228196    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.228196    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.228196    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.228196    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.228196    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:18.721199    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-379700
	I0520 03:45:18.721253    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.721253    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.721253    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.723897    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.724793    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.724793    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.724793    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.724793    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.724793    5028 round_trippers.go:580]     Audit-Id: ad781eed-d3f7-4aa1-9bba-d77958f59aff
	I0520 03:45:18.724793    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.724869    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.725146    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-379700","namespace":"kube-system","uid":"34876aed-79ab-4b82-afc1-d05cd777c4b1","resourceVersion":"512","creationTimestamp":"2024-05-20T10:42:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.mirror":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.seen":"2024-05-20T10:42:15.164480141Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5446 chars]
	I0520 03:45:18.725728    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:18.725800    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:18.725800    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:18.725897    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:18.728049    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:18.728049    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:18.728049    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:18.728049    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:18.728049    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:18.728049    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:18 GMT
	I0520 03:45:18.728049    5028 round_trippers.go:580]     Audit-Id: 6b90730a-44e3-4c54-a8b4-ed7c2dd17b86
	I0520 03:45:18.728049    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:18.728807    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:19.222568    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-379700
	I0520 03:45:19.222698    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.222698    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.222698    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.225640    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:19.225666    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.225719    5028 round_trippers.go:580]     Audit-Id: dcdd2687-ff70-481f-ada4-6132b1d5b49c
	I0520 03:45:19.225719    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.225719    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.225719    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.225719    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.225719    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.225995    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-379700","namespace":"kube-system","uid":"34876aed-79ab-4b82-afc1-d05cd777c4b1","resourceVersion":"586","creationTimestamp":"2024-05-20T10:42:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.mirror":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.seen":"2024-05-20T10:42:15.164480141Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0520 03:45:19.226670    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:19.226718    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.226718    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.226718    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.229329    5028 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 03:45:19.229329    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.229329    5028 round_trippers.go:580]     Audit-Id: b1c6854b-632c-49d5-9f7e-8109782fe786
	I0520 03:45:19.229329    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.229705    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.229705    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.229705    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.229705    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.229915    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:19.230298    5028 pod_ready.go:92] pod "kube-scheduler-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:19.230333    5028 pod_ready.go:81] duration metric: took 1.0124754s for pod "kube-scheduler-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:19.230402    5028 pod_ready.go:38] duration metric: took 10.5949914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 03:45:19.230454    5028 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 03:45:19.251717    5028 command_runner.go:130] > -16
	I0520 03:45:19.251717    5028 ops.go:34] apiserver oom_adj: -16
	I0520 03:45:19.251717    5028 kubeadm.go:591] duration metric: took 20.6568395s to restartPrimaryControlPlane
	I0520 03:45:19.251717    5028 kubeadm.go:393] duration metric: took 20.7192281s to StartCluster
	I0520 03:45:19.251717    5028 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:45:19.252778    5028 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:45:19.254518    5028 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:45:19.255651    5028 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.247.13 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:45:19.255651    5028 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 03:45:19.255942    5028 addons.go:69] Setting storage-provisioner=true in profile "functional-379700"
	I0520 03:45:19.255942    5028 addons.go:69] Setting default-storageclass=true in profile "functional-379700"
	I0520 03:45:19.256051    5028 addons.go:234] Setting addon storage-provisioner=true in "functional-379700"
	W0520 03:45:19.256084    5028 addons.go:243] addon storage-provisioner should already be in state true
	I0520 03:45:19.261603    5028 out.go:177] * Verifying Kubernetes components...
	I0520 03:45:19.256051    5028 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-379700"
	I0520 03:45:19.256157    5028 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:45:19.256157    5028 host.go:66] Checking if "functional-379700" exists ...
	I0520 03:45:19.262665    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:45:19.265374    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:45:19.279331    5028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:45:19.608981    5028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:45:19.641790    5028 node_ready.go:35] waiting up to 6m0s for node "functional-379700" to be "Ready" ...
	I0520 03:45:19.641973    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:19.641973    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.641973    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.641973    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.646156    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:19.646653    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.646653    5028 round_trippers.go:580]     Audit-Id: 4865112b-e76a-4c07-8c3c-20dead8e57a5
	I0520 03:45:19.646653    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.646653    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.646653    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.646653    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.646653    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.646935    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:19.647564    5028 node_ready.go:49] node "functional-379700" has status "Ready":"True"
	I0520 03:45:19.647564    5028 node_ready.go:38] duration metric: took 5.7741ms for node "functional-379700" to be "Ready" ...
	I0520 03:45:19.647564    5028 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 03:45:19.647736    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:19.647859    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.647859    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.647859    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.657452    5028 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 03:45:19.657452    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.657452    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.657452    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.657452    5028 round_trippers.go:580]     Audit-Id: eb2fb3e6-9d1c-4daf-b47a-9a53f11f3a9d
	I0520 03:45:19.657452    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.657452    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.657452    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.657452    5028 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"590"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"580","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0520 03:45:19.660960    5028 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:19.661055    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gn54n
	I0520 03:45:19.661055    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.661142    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.661142    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.667478    5028 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 03:45:19.667478    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.667478    5028 round_trippers.go:580]     Audit-Id: 936c5e24-5323-451c-996d-379e8ca9319d
	I0520 03:45:19.667478    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.667478    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.667478    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.667478    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.667478    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.667478    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"580","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0520 03:45:19.796859    5028 request.go:629] Waited for 128.5884ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:19.797314    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:19.797314    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.797314    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.797314    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.804840    5028 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 03:45:19.805677    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.805677    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.805677    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.805677    5028 round_trippers.go:580]     Audit-Id: d071047a-632b-4616-84a5-81fa283ae955
	I0520 03:45:19.805677    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.805677    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.805677    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.805677    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:19.806706    5028 pod_ready.go:92] pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:19.806706    5028 pod_ready.go:81] duration metric: took 145.746ms for pod "coredns-7db6d8ff4d-gn54n" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:19.806706    5028 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:19.985360    5028 request.go:629] Waited for 178.6536ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:19.985762    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/etcd-functional-379700
	I0520 03:45:19.985762    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:19.985762    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:19.985854    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:19.991361    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:19.992255    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:19.992255    5028 round_trippers.go:580]     Audit-Id: f111bd8c-15b9-4a66-a315-75b52eb101d0
	I0520 03:45:19.992255    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:19.992255    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:19.992255    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:19.992255    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:19.992255    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:19 GMT
	I0520 03:45:19.992587    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-379700","namespace":"kube-system","uid":"4cb21cbd-451f-44d5-9751-4ca6757c73fb","resourceVersion":"585","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.247.13:2379","kubernetes.io/config.hash":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.mirror":"ea0ddb831f5cf8b0a7b13de1fa37a294","kubernetes.io/config.seen":"2024-05-20T10:42:23.333248472Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6373 chars]
	I0520 03:45:20.191119    5028 request.go:629] Waited for 197.824ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:20.191557    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:20.191557    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:20.191647    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:20.191647    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:20.198084    5028 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 03:45:20.198084    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:20.198084    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:20.198084    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:20.198084    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:20.198084    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:20.198084    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:20 GMT
	I0520 03:45:20.198084    5028 round_trippers.go:580]     Audit-Id: 03e2c204-bc2d-4704-8b71-610e1f92d9c1
	I0520 03:45:20.198927    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:20.198927    5028 pod_ready.go:92] pod "etcd-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:20.198927    5028 pod_ready.go:81] duration metric: took 392.2202ms for pod "etcd-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:20.198927    5028 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:20.397324    5028 request.go:629] Waited for 197.8398ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-379700
	I0520 03:45:20.397549    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-379700
	I0520 03:45:20.397638    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:20.397660    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:20.397660    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:20.401006    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:20.401664    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:20.401664    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:20.401664    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:20.401664    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:20.401664    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:20 GMT
	I0520 03:45:20.401664    5028 round_trippers.go:580]     Audit-Id: 24cceff0-f90f-49a4-bc32-d080bb7f97ab
	I0520 03:45:20.401664    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:20.402190    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-379700","namespace":"kube-system","uid":"0929729a-d7cf-423b-aa56-eda92cd65ca9","resourceVersion":"579","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.247.13:8441","kubernetes.io/config.hash":"8945aa8d8b9e2ca441fc04568a50a45a","kubernetes.io/config.mirror":"8945aa8d8b9e2ca441fc04568a50a45a","kubernetes.io/config.seen":"2024-05-20T10:42:23.333251572Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0520 03:45:20.586991    5028 request.go:629] Waited for 184.08ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:20.587049    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:20.587049    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:20.587049    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:20.587049    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:20.590645    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:20.591156    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:20.591156    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:20.591229    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:20.591229    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:20 GMT
	I0520 03:45:20.591229    5028 round_trippers.go:580]     Audit-Id: 6eff98d1-affe-4d5c-b392-ec8529a756c0
	I0520 03:45:20.591229    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:20.591229    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:20.591419    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:20.592169    5028 pod_ready.go:92] pod "kube-apiserver-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:20.592169    5028 pod_ready.go:81] duration metric: took 393.2413ms for pod "kube-apiserver-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:20.592169    5028 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:20.793900    5028 request.go:629] Waited for 201.7311ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-379700
	I0520 03:45:20.794163    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-379700
	I0520 03:45:20.794163    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:20.794249    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:20.794249    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:20.797952    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:20.797952    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:20.797952    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:20.797952    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:20.797952    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:20 GMT
	I0520 03:45:20.797952    5028 round_trippers.go:580]     Audit-Id: f96cf0af-1f9d-445e-8876-a0689868fe6c
	I0520 03:45:20.797952    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:20.797952    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:20.800960    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-379700","namespace":"kube-system","uid":"4bfc2761-c3ce-435c-a97c-c45e3bb52387","resourceVersion":"577","creationTimestamp":"2024-05-20T10:42:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"79b3d0d30e88afb17392da9e30389486","kubernetes.io/config.mirror":"79b3d0d30e88afb17392da9e30389486","kubernetes.io/config.seen":"2024-05-20T10:42:23.333252672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0520 03:45:20.999366    5028 request.go:629] Waited for 197.7255ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:20.999566    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:20.999677    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:20.999677    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:20.999677    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.003044    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:21.003044    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.003044    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.003349    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.003349    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:20 GMT
	I0520 03:45:21.003349    5028 round_trippers.go:580]     Audit-Id: 45e647d7-22f3-4349-967a-969629d16ad4
	I0520 03:45:21.003349    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.003349    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:21.003477    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:21.003959    5028 pod_ready.go:92] pod "kube-controller-manager-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:21.004094    5028 pod_ready.go:81] duration metric: took 411.9248ms for pod "kube-controller-manager-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:21.004094    5028 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dsfcm" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:21.189839    5028 request.go:629] Waited for 185.4844ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-proxy-dsfcm
	I0520 03:45:21.190072    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-proxy-dsfcm
	I0520 03:45:21.190072    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:21.190120    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:21.190120    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.195800    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:21.195974    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.195974    5028 round_trippers.go:580]     Audit-Id: 5ea6faef-2d89-4eac-b3a6-770292dacb16
	I0520 03:45:21.195974    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.195974    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:21.195974    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.195974    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.195974    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:21 GMT
	I0520 03:45:21.195974    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dsfcm","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef35b0df-375a-4dd9-8677-c61b5cb5691b","resourceVersion":"522","creationTimestamp":"2024-05-20T10:42:36Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9b3b8b0c-ce7b-42d1-9da9-0a08718eb4c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b3b8b0c-ce7b-42d1-9da9-0a08718eb4c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0520 03:45:21.393639    5028 request.go:629] Waited for 196.5771ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:21.393840    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:21.393840    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:21.393918    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.393918    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:21.397167    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:21.397167    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.397167    5028 round_trippers.go:580]     Audit-Id: 049d262b-0729-4401-a5aa-79d9a6ec09ac
	I0520 03:45:21.397818    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.397818    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:21.397818    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.397818    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.397818    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:21 GMT
	I0520 03:45:21.398066    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:21.398724    5028 pod_ready.go:92] pod "kube-proxy-dsfcm" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:21.398724    5028 pod_ready.go:81] duration metric: took 394.6293ms for pod "kube-proxy-dsfcm" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:21.398724    5028 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:21.594263    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:45:21.594263    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:21.595313    5028 request.go:629] Waited for 195.3413ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-379700
	I0520 03:45:21.595408    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-379700
	I0520 03:45:21.595408    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:21.595408    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.595481    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:21.596005    5028 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:45:21.596121    5028 kapi.go:59] client config for functional-379700: &rest.Config{Host:"https://172.25.247.13:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-379700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-379700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:45:21.596960    5028 addons.go:234] Setting addon default-storageclass=true in "functional-379700"
	W0520 03:45:21.596960    5028 addons.go:243] addon default-storageclass should already be in state true
	I0520 03:45:21.597489    5028 host.go:66] Checking if "functional-379700" exists ...
	I0520 03:45:21.598416    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:45:21.603432    5028 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 03:45:21.603980    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.603980    5028 round_trippers.go:580]     Audit-Id: d0519920-234a-4d01-a888-7285185b0b7f
	I0520 03:45:21.603980    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.603980    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:21.603980    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.603980    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.603980    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:21 GMT
	I0520 03:45:21.604175    5028 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-379700","namespace":"kube-system","uid":"34876aed-79ab-4b82-afc1-d05cd777c4b1","resourceVersion":"586","creationTimestamp":"2024-05-20T10:42:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.mirror":"2a42312805dd790ecb411004595107ad","kubernetes.io/config.seen":"2024-05-20T10:42:15.164480141Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0520 03:45:21.614922    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:45:21.614922    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:21.619953    5028 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:45:21.624150    5028 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:45:21.624203    5028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 03:45:21.624203    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:45:21.787367    5028 request.go:629] Waited for 182.3021ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:21.787488    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes/functional-379700
	I0520 03:45:21.787488    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:21.787488    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:21.787587    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.791988    5028 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 03:45:21.791988    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.791988    5028 round_trippers.go:580]     Audit-Id: 3307cb1b-9d27-4f87-a852-90213bb69749
	I0520 03:45:21.791988    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.792098    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:21.792149    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.792149    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.792149    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:21 GMT
	I0520 03:45:21.793290    5028 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-20T10:42:19Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0520 03:45:21.793806    5028 pod_ready.go:92] pod "kube-scheduler-functional-379700" in "kube-system" namespace has status "Ready":"True"
	I0520 03:45:21.793806    5028 pod_ready.go:81] duration metric: took 394.9863ms for pod "kube-scheduler-functional-379700" in "kube-system" namespace to be "Ready" ...
	I0520 03:45:21.793894    5028 pod_ready.go:38] duration metric: took 2.1461553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 03:45:21.793991    5028 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:45:21.807671    5028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:45:21.836168    5028 command_runner.go:130] > 5046
	I0520 03:45:21.836168    5028 api_server.go:72] duration metric: took 2.5802917s to wait for apiserver process to appear ...
	I0520 03:45:21.836292    5028 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:45:21.836292    5028 api_server.go:253] Checking apiserver healthz at https://172.25.247.13:8441/healthz ...
	I0520 03:45:21.843506    5028 api_server.go:279] https://172.25.247.13:8441/healthz returned 200:
	ok
	I0520 03:45:21.844538    5028 round_trippers.go:463] GET https://172.25.247.13:8441/version
	I0520 03:45:21.844631    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:21.844631    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:21.844631    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.849154    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:21.849154    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.849154    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.849297    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.849297    5028 round_trippers.go:580]     Content-Length: 263
	I0520 03:45:21.849297    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:21 GMT
	I0520 03:45:21.849297    5028 round_trippers.go:580]     Audit-Id: c73a312f-f779-461d-b331-a42c1ba210df
	I0520 03:45:21.849297    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.849297    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:21.849297    5028 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 03:45:21.849429    5028 api_server.go:141] control plane version: v1.30.1
	I0520 03:45:21.849429    5028 api_server.go:131] duration metric: took 13.137ms to wait for apiserver health ...
	I0520 03:45:21.849429    5028 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 03:45:21.991518    5028 request.go:629] Waited for 141.7112ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:21.991518    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:21.991769    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:21.991769    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:21.991769    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:21.998845    5028 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 03:45:21.998845    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:21.998845    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:21.998845    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:21.998845    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:21 GMT
	I0520 03:45:21.998845    5028 round_trippers.go:580]     Audit-Id: e77df859-3575-4b4b-bf13-d739d2a2b277
	I0520 03:45:21.998845    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:21.998845    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:22.000389    5028 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"591"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"580","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0520 03:45:22.003738    5028 system_pods.go:59] 7 kube-system pods found
	I0520 03:45:22.003818    5028 system_pods.go:61] "coredns-7db6d8ff4d-gn54n" [fcc9490b-312e-48f0-b0aa-687e3c005f39] Running
	I0520 03:45:22.003818    5028 system_pods.go:61] "etcd-functional-379700" [4cb21cbd-451f-44d5-9751-4ca6757c73fb] Running
	I0520 03:45:22.003818    5028 system_pods.go:61] "kube-apiserver-functional-379700" [0929729a-d7cf-423b-aa56-eda92cd65ca9] Running
	I0520 03:45:22.003818    5028 system_pods.go:61] "kube-controller-manager-functional-379700" [4bfc2761-c3ce-435c-a97c-c45e3bb52387] Running
	I0520 03:45:22.003818    5028 system_pods.go:61] "kube-proxy-dsfcm" [ef35b0df-375a-4dd9-8677-c61b5cb5691b] Running
	I0520 03:45:22.003818    5028 system_pods.go:61] "kube-scheduler-functional-379700" [34876aed-79ab-4b82-afc1-d05cd777c4b1] Running
	I0520 03:45:22.003891    5028 system_pods.go:61] "storage-provisioner" [6379f4b6-01e2-443a-8b28-13183f7119e2] Running
	I0520 03:45:22.003891    5028 system_pods.go:74] duration metric: took 154.4615ms to wait for pod list to return data ...
	I0520 03:45:22.003891    5028 default_sa.go:34] waiting for default service account to be created ...
	I0520 03:45:22.198735    5028 request.go:629] Waited for 194.5233ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/default/serviceaccounts
	I0520 03:45:22.198917    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/default/serviceaccounts
	I0520 03:45:22.198917    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:22.198917    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:22.198917    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:22.205884    5028 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 03:45:22.205884    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:22.205884    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:22 GMT
	I0520 03:45:22.205884    5028 round_trippers.go:580]     Audit-Id: 8cc8cb26-531a-4521-b996-6994fcc70130
	I0520 03:45:22.205884    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:22.205884    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:22.205884    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:22.205884    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:22.205884    5028 round_trippers.go:580]     Content-Length: 261
	I0520 03:45:22.205884    5028 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"591"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"be14fc0f-1e07-4d4b-8d1e-5b9298404b44","resourceVersion":"321","creationTimestamp":"2024-05-20T10:42:36Z"}}]}
	I0520 03:45:22.206704    5028 default_sa.go:45] found service account: "default"
	I0520 03:45:22.206803    5028 default_sa.go:55] duration metric: took 202.8605ms for default service account to be created ...
	I0520 03:45:22.206850    5028 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 03:45:22.386509    5028 request.go:629] Waited for 179.3754ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:22.386620    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/namespaces/kube-system/pods
	I0520 03:45:22.386620    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:22.386620    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:22.386620    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:22.395182    5028 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 03:45:22.395182    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:22.395182    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:22.395182    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:22.395182    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:22 GMT
	I0520 03:45:22.395182    5028 round_trippers.go:580]     Audit-Id: 43d7cb53-93c2-44bd-be10-7395143c2f77
	I0520 03:45:22.395182    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:22.395182    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:22.396370    5028 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"591"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gn54n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fcc9490b-312e-48f0-b0aa-687e3c005f39","resourceVersion":"580","creationTimestamp":"2024-05-20T10:42:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8a144192-b3d2-4b86-abaf-d0035997aef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T10:42:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a144192-b3d2-4b86-abaf-d0035997aef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0520 03:45:22.398737    5028 system_pods.go:86] 7 kube-system pods found
	I0520 03:45:22.398786    5028 system_pods.go:89] "coredns-7db6d8ff4d-gn54n" [fcc9490b-312e-48f0-b0aa-687e3c005f39] Running
	I0520 03:45:22.398786    5028 system_pods.go:89] "etcd-functional-379700" [4cb21cbd-451f-44d5-9751-4ca6757c73fb] Running
	I0520 03:45:22.398786    5028 system_pods.go:89] "kube-apiserver-functional-379700" [0929729a-d7cf-423b-aa56-eda92cd65ca9] Running
	I0520 03:45:22.398786    5028 system_pods.go:89] "kube-controller-manager-functional-379700" [4bfc2761-c3ce-435c-a97c-c45e3bb52387] Running
	I0520 03:45:22.398839    5028 system_pods.go:89] "kube-proxy-dsfcm" [ef35b0df-375a-4dd9-8677-c61b5cb5691b] Running
	I0520 03:45:22.398839    5028 system_pods.go:89] "kube-scheduler-functional-379700" [34876aed-79ab-4b82-afc1-d05cd777c4b1] Running
	I0520 03:45:22.398839    5028 system_pods.go:89] "storage-provisioner" [6379f4b6-01e2-443a-8b28-13183f7119e2] Running
	I0520 03:45:22.398839    5028 system_pods.go:126] duration metric: took 191.9886ms to wait for k8s-apps to be running ...
	I0520 03:45:22.398889    5028 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 03:45:22.412970    5028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 03:45:22.437754    5028 system_svc.go:56] duration metric: took 38.8648ms WaitForService to wait for kubelet
	I0520 03:45:22.438650    5028 kubeadm.go:576] duration metric: took 3.1827723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:45:22.438772    5028 node_conditions.go:102] verifying NodePressure condition ...
	I0520 03:45:22.588859    5028 request.go:629] Waited for 149.9955ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.247.13:8441/api/v1/nodes
	I0520 03:45:22.588943    5028 round_trippers.go:463] GET https://172.25.247.13:8441/api/v1/nodes
	I0520 03:45:22.588943    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:22.588943    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:22.588943    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:22.594559    5028 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 03:45:22.594559    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:22.595542    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:22.595542    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:22.595542    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:22 GMT
	I0520 03:45:22.595542    5028 round_trippers.go:580]     Audit-Id: 847f4ccc-5b40-4dc7-97d6-371e7ebd631d
	I0520 03:45:22.595542    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:22.595542    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:22.595542    5028 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"591"},"items":[{"metadata":{"name":"functional-379700","uid":"403ea6f7-5d70-4931-a63a-b234a0918ff6","resourceVersion":"508","creationTimestamp":"2024-05-20T10:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-379700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"functional-379700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T03_42_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0520 03:45:22.595542    5028 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 03:45:22.595542    5028 node_conditions.go:123] node cpu capacity is 2
	I0520 03:45:22.595542    5028 node_conditions.go:105] duration metric: took 156.7693ms to run NodePressure ...
	I0520 03:45:22.595542    5028 start.go:240] waiting for startup goroutines ...
	I0520 03:45:23.940329    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:45:23.940708    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:23.940841    5028 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 03:45:23.940894    5028 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 03:45:23.940988    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
	I0520 03:45:23.957958    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:45:23.957958    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:23.958625    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:45:26.283993    5028 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:45:26.284108    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:26.284171    5028 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:45:26.691044    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:45:26.691369    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:26.691632    5028 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
	I0520 03:45:26.836899    5028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:45:27.680173    5028 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0520 03:45:27.680173    5028 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0520 03:45:27.680261    5028 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0520 03:45:27.680261    5028 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0520 03:45:27.680261    5028 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0520 03:45:27.680261    5028 command_runner.go:130] > pod/storage-provisioner configured
	I0520 03:45:28.921690    5028 main.go:141] libmachine: [stdout =====>] : 172.25.247.13
	
	I0520 03:45:28.921690    5028 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:45:28.922404    5028 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
	I0520 03:45:29.095482    5028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 03:45:29.261826    5028 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0520 03:45:29.261826    5028 round_trippers.go:463] GET https://172.25.247.13:8441/apis/storage.k8s.io/v1/storageclasses
	I0520 03:45:29.261826    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:29.261826    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:29.261826    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:29.265832    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:29.266317    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:29.266317    5028 round_trippers.go:580]     Content-Length: 1273
	I0520 03:45:29.266317    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:29 GMT
	I0520 03:45:29.266317    5028 round_trippers.go:580]     Audit-Id: 44544a6e-73fb-45e0-95e6-f52e5e695b60
	I0520 03:45:29.266317    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:29.266317    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:29.266317    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:29.266408    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:29.266408    5028 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"standard","uid":"fca2790b-5717-47e8-9d24-c482ed8f33d2","resourceVersion":"400","creationTimestamp":"2024-05-20T10:42:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T10:42:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 03:45:29.267261    5028 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fca2790b-5717-47e8-9d24-c482ed8f33d2","resourceVersion":"400","creationTimestamp":"2024-05-20T10:42:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T10:42:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 03:45:29.267406    5028 round_trippers.go:463] PUT https://172.25.247.13:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 03:45:29.267406    5028 round_trippers.go:469] Request Headers:
	I0520 03:45:29.267406    5028 round_trippers.go:473]     Accept: application/json, */*
	I0520 03:45:29.267406    5028 round_trippers.go:473]     Content-Type: application/json
	I0520 03:45:29.267406    5028 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 03:45:29.271808    5028 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 03:45:29.271808    5028 round_trippers.go:577] Response Headers:
	I0520 03:45:29.271808    5028 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 622c5cf5-cd6f-4d88-8447-1294ec911197
	I0520 03:45:29.272088    5028 round_trippers.go:580]     Content-Length: 1220
	I0520 03:45:29.272088    5028 round_trippers.go:580]     Date: Mon, 20 May 2024 10:45:29 GMT
	I0520 03:45:29.272088    5028 round_trippers.go:580]     Audit-Id: fafa0dd1-2071-494b-80f3-0aa33ea75937
	I0520 03:45:29.272088    5028 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 03:45:29.272088    5028 round_trippers.go:580]     Content-Type: application/json
	I0520 03:45:29.272088    5028 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1639bf2-f8e8-47bb-ad46-7c9f0bc33090
	I0520 03:45:29.272281    5028 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fca2790b-5717-47e8-9d24-c482ed8f33d2","resourceVersion":"400","creationTimestamp":"2024-05-20T10:42:47Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T10:42:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 03:45:29.277198    5028 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 03:45:29.280315    5028 addons.go:505] duration metric: took 10.024653s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 03:45:29.280315    5028 start.go:245] waiting for cluster config update ...
	I0520 03:45:29.280315    5028 start.go:254] writing updated cluster config ...
	I0520 03:45:29.291860    5028 ssh_runner.go:195] Run: rm -f paused
	I0520 03:45:29.435978    5028 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 03:45:29.441197    5028 out.go:177] * Done! kubectl is now configured to use "functional-379700" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 20 10:45:07 functional-379700 dockerd[4268]: time="2024-05-20T10:45:07.865341539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:07 functional-379700 dockerd[4268]: time="2024-05-20T10:45:07.866338781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:07 functional-379700 dockerd[4268]: time="2024-05-20T10:45:07.965112592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:45:07 functional-379700 dockerd[4268]: time="2024-05-20T10:45:07.965481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:45:07 functional-379700 dockerd[4268]: time="2024-05-20T10:45:07.965741182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:07 functional-379700 dockerd[4268]: time="2024-05-20T10:45:07.966119536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.003814300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.004329469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.005130577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.005570636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 cri-dockerd[4486]: time="2024-05-20T10:45:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dc4a3e24a53cc0ecb4d71494ddfe592cb835fee9862dc5505016083b15a94879/resolv.conf as [nameserver 172.25.240.1]"
	May 20 10:45:08 functional-379700 cri-dockerd[4486]: time="2024-05-20T10:45:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b351e7a0fc55b6d9a3f07a12be6d10e5c1f81ee1ac52538bbb16af842f61cb71/resolv.conf as [nameserver 172.25.240.1]"
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.397577992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.397868231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.397912437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.398488714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.416351209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.416644648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.416843575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.417192622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 cri-dockerd[4486]: time="2024-05-20T10:45:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/793a79a22b60b1d432484c1d1a2872f9b9cc95e2398b430f80b05f54cfc20d31/resolv.conf as [nameserver 172.25.240.1]"
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.951523712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.951581920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.951593121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:45:08 functional-379700 dockerd[4268]: time="2024-05-20T10:45:08.951676733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cdde62619697       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   793a79a22b60b       coredns-7db6d8ff4d-gn54n
	1a4a3bd6df4c9       747097150317f       2 minutes ago       Running             kube-proxy                1                   b351e7a0fc55b       kube-proxy-dsfcm
	e4aacaf82b0a7       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   dc4a3e24a53cc       storage-provisioner
	f519f756dd7a3       25a1387cdab82       2 minutes ago       Running             kube-controller-manager   1                   bd16ea7b9fbc4       kube-controller-manager-functional-379700
	5f7c8acb5fd73       a52dc94f0a912       2 minutes ago       Running             kube-scheduler            1                   8ce1927696b16       kube-scheduler-functional-379700
	896c7b7b8a15f       91be940803172       2 minutes ago       Running             kube-apiserver            1                   1acf586e8c57e       kube-apiserver-functional-379700
	84f8204d68477       3861cfcd7c04c       2 minutes ago       Running             etcd                      1                   e1046f5e68449       etcd-functional-379700
	77efc1a400941       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   5ac570d8f0eb1       storage-provisioner
	e46c4e02b7880       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   65d0742b9b31d       coredns-7db6d8ff4d-gn54n
	4a07d02ae9e3c       747097150317f       4 minutes ago       Exited              kube-proxy                0                   3bb9059f94d3d       kube-proxy-dsfcm
	335ad06ad6f93       3861cfcd7c04c       5 minutes ago       Exited              etcd                      0                   1e5ba694a474b       etcd-functional-379700
	42fbcbecb6583       a52dc94f0a912       5 minutes ago       Exited              kube-scheduler            0                   26b9543603d32       kube-scheduler-functional-379700
	d69024fc6d5a7       91be940803172       5 minutes ago       Exited              kube-apiserver            0                   2697ab469a753       kube-apiserver-functional-379700
	46379da6626c0       25a1387cdab82       5 minutes ago       Exited              kube-controller-manager   0                   4d6e005742327       kube-controller-manager-functional-379700
	
	
	==> coredns [0cdde6261969] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52230 - 20061 "HINFO IN 4142989273844079910.8392753440559023593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037869454s
	
	
	==> coredns [e46c4e02b788] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1773176878]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:42:39.298) (total time: 30001ms):
	Trace[1773176878]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:43:09.299)
	Trace[1773176878]: [30.00109505s] [30.00109505s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[433436662]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:42:39.297) (total time: 30001ms):
	Trace[433436662]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:43:09.299)
	Trace[433436662]: [30.001393554s] [30.001393554s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[636895441]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:42:39.297) (total time: 30001ms):
	Trace[636895441]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:43:09.299)
	Trace[636895441]: [30.00172126s] [30.00172126s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46967 - 22782 "HINFO IN 8173382655774639845.5983126806994600221. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034215286s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-379700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-379700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=functional-379700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T03_42_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:42:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-379700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:47:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:47:08 +0000   Mon, 20 May 2024 10:42:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:47:08 +0000   Mon, 20 May 2024 10:42:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:47:08 +0000   Mon, 20 May 2024 10:42:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:47:08 +0000   Mon, 20 May 2024 10:42:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.247.13
	  Hostname:    functional-379700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 0564ad4fd5a14cd4801db2c85b4c2c9f
	  System UUID:                5e5f0a53-3ed9-6549-8da1-86b0f5509305
	  Boot ID:                    2125f795-c0d5-4b91-8c24-c69da96aeb67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gn54n                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 etcd-functional-379700                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m54s
	  kube-system                 kube-apiserver-functional-379700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-functional-379700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-dsfcm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-functional-379700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m54s                  kubelet          Node functional-379700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s                  kubelet          Node functional-379700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s                  kubelet          Node functional-379700 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m50s                  kubelet          Node functional-379700 status is now: NodeReady
	  Normal  RegisteredNode           4m41s                  node-controller  Node functional-379700 event: Registered Node functional-379700 in Controller
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node functional-379700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node functional-379700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node functional-379700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                   node-controller  Node functional-379700 event: Registered Node functional-379700 in Controller
	
	
	==> dmesg <==
	[  +5.455889] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.721786] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +7.988616] systemd-fstab-generator[1742]: Ignoring "noauto" option for root device
	[  +0.109207] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.548083] systemd-fstab-generator[2145]: Ignoring "noauto" option for root device
	[  +0.126065] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.392675] systemd-fstab-generator[2379]: Ignoring "noauto" option for root device
	[  +0.287834] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.397083] kauditd_printk_skb: 88 callbacks suppressed
	[May20 10:43] kauditd_printk_skb: 10 callbacks suppressed
	[May20 10:44] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.679136] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.295741] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.303555] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +5.278250] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.093462] systemd-fstab-generator[4439]: Ignoring "noauto" option for root device
	[  +0.207559] systemd-fstab-generator[4451]: Ignoring "noauto" option for root device
	[  +0.204065] systemd-fstab-generator[4463]: Ignoring "noauto" option for root device
	[  +0.274055] systemd-fstab-generator[4478]: Ignoring "noauto" option for root device
	[  +0.894594] systemd-fstab-generator[4634]: Ignoring "noauto" option for root device
	[May20 10:45] systemd-fstab-generator[4749]: Ignoring "noauto" option for root device
	[  +0.123300] kauditd_printk_skb: 140 callbacks suppressed
	[  +6.525891] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.804558] systemd-fstab-generator[5657]: Ignoring "noauto" option for root device
	[  +0.203042] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [335ad06ad6f9] <==
	{"level":"info","ts":"2024-05-20T10:42:17.624996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T10:42:17.625007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c received MsgVoteResp from 14a7b64a61c3677c at term 2"}
	{"level":"info","ts":"2024-05-20T10:42:17.625017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c became leader at term 2"}
	{"level":"info","ts":"2024-05-20T10:42:17.625042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 14a7b64a61c3677c elected leader 14a7b64a61c3677c at term 2"}
	{"level":"info","ts":"2024-05-20T10:42:17.635914Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:42:17.642269Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"14a7b64a61c3677c","local-member-attributes":"{Name:functional-379700 ClientURLs:[https://172.25.247.13:2379]}","request-path":"/0/members/14a7b64a61c3677c/attributes","cluster-id":"1a92e52e8cc505cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T10:42:17.642499Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:42:17.64318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:42:17.648446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.247.13:2379"}
	{"level":"info","ts":"2024-05-20T10:42:17.649881Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T10:42:17.652106Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T10:42:17.662975Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a92e52e8cc505cb","local-member-id":"14a7b64a61c3677c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:42:17.663184Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:42:17.66326Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:42:17.664243Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T10:44:42.599174Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T10:44:42.599269Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-379700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.247.13:2380"],"advertise-client-urls":["https://172.25.247.13:2379"]}
	{"level":"warn","ts":"2024-05-20T10:44:42.59935Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T10:44:42.599596Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T10:44:42.647618Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.25.247.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T10:44:42.647671Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.25.247.13:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T10:44:42.649094Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"14a7b64a61c3677c","current-leader-member-id":"14a7b64a61c3677c"}
	{"level":"info","ts":"2024-05-20T10:44:42.656488Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.25.247.13:2380"}
	{"level":"info","ts":"2024-05-20T10:44:42.656594Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.25.247.13:2380"}
	{"level":"info","ts":"2024-05-20T10:44:42.65664Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-379700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.247.13:2380"],"advertise-client-urls":["https://172.25.247.13:2379"]}
	
	
	==> etcd [84f8204d6847] <==
	{"level":"info","ts":"2024-05-20T10:45:02.795491Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T10:45:02.7955Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T10:45:02.795728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c switched to configuration voters=(1488358632453269372)"}
	{"level":"info","ts":"2024-05-20T10:45:02.795779Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a92e52e8cc505cb","local-member-id":"14a7b64a61c3677c","added-peer-id":"14a7b64a61c3677c","added-peer-peer-urls":["https://172.25.247.13:2380"]}
	{"level":"info","ts":"2024-05-20T10:45:02.795972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a92e52e8cc505cb","local-member-id":"14a7b64a61c3677c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:45:02.796007Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:45:02.808887Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T10:45:02.809102Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"14a7b64a61c3677c","initial-advertise-peer-urls":["https://172.25.247.13:2380"],"listen-peer-urls":["https://172.25.247.13:2380"],"advertise-client-urls":["https://172.25.247.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.247.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T10:45:02.809125Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T10:45:02.809232Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.247.13:2380"}
	{"level":"info","ts":"2024-05-20T10:45:02.809242Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.247.13:2380"}
	{"level":"info","ts":"2024-05-20T10:45:04.040448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T10:45:04.040855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T10:45:04.041075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c received MsgPreVoteResp from 14a7b64a61c3677c at term 2"}
	{"level":"info","ts":"2024-05-20T10:45:04.043459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:45:04.043654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c received MsgVoteResp from 14a7b64a61c3677c at term 3"}
	{"level":"info","ts":"2024-05-20T10:45:04.04392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14a7b64a61c3677c became leader at term 3"}
	{"level":"info","ts":"2024-05-20T10:45:04.044055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 14a7b64a61c3677c elected leader 14a7b64a61c3677c at term 3"}
	{"level":"info","ts":"2024-05-20T10:45:04.05878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:45:04.071052Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.247.13:2379"}
	{"level":"info","ts":"2024-05-20T10:45:04.086696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:45:04.087697Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T10:45:04.087784Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T10:45:04.058734Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"14a7b64a61c3677c","local-member-attributes":"{Name:functional-379700 ClientURLs:[https://172.25.247.13:2379]}","request-path":"/0/members/14a7b64a61c3677c/attributes","cluster-id":"1a92e52e8cc505cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T10:45:04.144748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:47:17 up 7 min,  0 users,  load average: 0.71, 0.63, 0.33
	Linux functional-379700 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [896c7b7b8a15] <==
	I0520 10:45:06.399137       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 10:45:06.399182       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 10:45:06.399612       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 10:45:06.399989       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 10:45:06.400543       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 10:45:06.400824       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 10:45:06.401317       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 10:45:06.401364       1 aggregator.go:165] initial CRD sync complete...
	I0520 10:45:06.401437       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 10:45:06.401450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 10:45:06.401457       1 cache.go:39] Caches are synced for autoregister controller
	I0520 10:45:06.409023       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0520 10:45:06.412875       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0520 10:45:06.419226       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 10:45:06.420924       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 10:45:06.421021       1 policy_source.go:224] refreshing policies
	I0520 10:45:06.437149       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 10:45:07.213031       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 10:45:08.286732       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 10:45:08.324276       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 10:45:08.447570       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 10:45:08.564198       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 10:45:08.584773       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 10:45:19.602069       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 10:45:19.671698       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d69024fc6d5a] <==
	W0520 10:44:51.786338       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.801009       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.837968       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.909596       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.909596       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.947102       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.947948       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.958667       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.965698       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.972335       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:51.995915       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.054648       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.236630       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.240235       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.247715       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.266168       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.332105       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.335602       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.339684       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.340176       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.400978       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.402568       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.416226       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.487111       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0520 10:44:52.629893       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [46379da6626c] <==
	I0520 10:42:36.192062       1 shared_informer.go:320] Caches are synced for PV protection
	I0520 10:42:36.193304       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 10:42:36.229367       1 shared_informer.go:320] Caches are synced for disruption
	I0520 10:42:36.282014       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:42:36.346962       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:42:36.364839       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 10:42:36.787270       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 10:42:36.812896       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 10:42:36.812933       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 10:42:37.315349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="406.422883ms"
	I0520 10:42:37.350175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.764303ms"
	I0520 10:42:37.350509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.399µs"
	I0520 10:42:37.371737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="304.596µs"
	I0520 10:42:38.970308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="134.967772ms"
	I0520 10:42:39.044896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.10891ms"
	I0520 10:42:39.045019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.001µs"
	I0520 10:42:39.046172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.901µs"
	I0520 10:42:39.763265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.201µs"
	I0520 10:42:39.850527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.502µs"
	I0520 10:42:49.610894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.301µs"
	I0520 10:42:49.924773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.101µs"
	I0520 10:42:49.951369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.801µs"
	I0520 10:42:49.960430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.4µs"
	I0520 10:43:17.649543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.947088ms"
	I0520 10:43:17.651416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40µs"
	
	
	==> kube-controller-manager [f519f756dd7a] <==
	I0520 10:45:19.419048       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 10:45:19.423662       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0520 10:45:19.426128       1 shared_informer.go:320] Caches are synced for node
	I0520 10:45:19.428191       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0520 10:45:19.432233       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0520 10:45:19.434219       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0520 10:45:19.434747       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0520 10:45:19.435642       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0520 10:45:19.436315       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0520 10:45:19.436626       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0520 10:45:19.441517       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0520 10:45:19.469179       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 10:45:19.504089       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 10:45:19.533983       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 10:45:19.537048       1 shared_informer.go:320] Caches are synced for daemon sets
	I0520 10:45:19.546232       1 shared_informer.go:320] Caches are synced for taint
	I0520 10:45:19.546701       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0520 10:45:19.546989       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-379700"
	I0520 10:45:19.547312       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 10:45:19.575411       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:45:19.580258       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:45:19.585608       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 10:45:19.995295       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 10:45:20.066290       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 10:45:20.066460       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1a4a3bd6df4c] <==
	I0520 10:45:08.732881       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:45:08.766543       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.247.13"]
	I0520 10:45:08.835121       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:45:08.835282       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:45:08.835304       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:45:08.839713       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:45:08.840342       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:45:08.840490       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:45:08.843615       1 config.go:192] "Starting service config controller"
	I0520 10:45:08.844457       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:45:08.844840       1 config.go:319] "Starting node config controller"
	I0520 10:45:08.845025       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:45:08.845644       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:45:08.846697       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:45:08.945994       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:45:08.945994       1 shared_informer.go:320] Caches are synced for node config
	I0520 10:45:08.947159       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4a07d02ae9e3] <==
	I0520 10:42:39.326530       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:42:39.354758       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.247.13"]
	I0520 10:42:39.409388       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:42:39.409560       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:42:39.409647       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:42:39.414542       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:42:39.414753       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:42:39.414770       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:42:39.416311       1 config.go:319] "Starting node config controller"
	I0520 10:42:39.416591       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:42:39.416638       1 config.go:192] "Starting service config controller"
	I0520 10:42:39.419933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:42:39.416653       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:42:39.424989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:42:39.424998       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:42:39.517727       1 shared_informer.go:320] Caches are synced for node config
	I0520 10:42:39.525489       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [42fbcbecb658] <==
	E0520 10:42:20.890612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:42:20.910179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:42:20.910232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:42:20.993123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:42:20.993182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:42:21.036961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:42:21.037924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:42:21.038153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 10:42:21.038193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 10:42:21.163482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:42:21.163869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:42:21.218388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:42:21.218896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:42:21.259841       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:42:21.260073       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:42:21.340937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:42:21.341010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:42:21.377053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:42:21.377726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:42:21.444858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:42:21.444995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:42:21.451021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:42:21.451470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0520 10:42:23.045613       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 10:44:42.597178       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5f7c8acb5fd7] <==
	I0520 10:45:04.576248       1 serving.go:380] Generated self-signed cert in-memory
	W0520 10:45:06.311198       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 10:45:06.311543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:45:06.311641       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 10:45:06.311793       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 10:45:06.342348       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 10:45:06.342412       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:45:06.345253       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 10:45:06.345284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 10:45:06.348751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 10:45:06.351435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0520 10:45:06.359882       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 10:45:06.359941       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	I0520 10:45:07.845921       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:45:06 functional-379700 kubelet[4756]: I0520 10:45:06.500800    4756 kubelet_node_status.go:112] "Node was previously registered" node="functional-379700"
	May 20 10:45:06 functional-379700 kubelet[4756]: I0520 10:45:06.500968    4756 kubelet_node_status.go:76] "Successfully registered node" node="functional-379700"
	May 20 10:45:06 functional-379700 kubelet[4756]: I0520 10:45:06.503222    4756 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 10:45:06 functional-379700 kubelet[4756]: I0520 10:45:06.504125    4756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.214613    4756 apiserver.go:52] "Watching apiserver"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.227241    4756 topology_manager.go:215] "Topology Admit Handler" podUID="ef35b0df-375a-4dd9-8677-c61b5cb5691b" podNamespace="kube-system" podName="kube-proxy-dsfcm"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.227477    4756 topology_manager.go:215] "Topology Admit Handler" podUID="fcc9490b-312e-48f0-b0aa-687e3c005f39" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gn54n"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.227556    4756 topology_manager.go:215] "Topology Admit Handler" podUID="6379f4b6-01e2-443a-8b28-13183f7119e2" podNamespace="kube-system" podName="storage-provisioner"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.236013    4756 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.331476    4756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6379f4b6-01e2-443a-8b28-13183f7119e2-tmp\") pod \"storage-provisioner\" (UID: \"6379f4b6-01e2-443a-8b28-13183f7119e2\") " pod="kube-system/storage-provisioner"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.331546    4756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef35b0df-375a-4dd9-8677-c61b5cb5691b-xtables-lock\") pod \"kube-proxy-dsfcm\" (UID: \"ef35b0df-375a-4dd9-8677-c61b5cb5691b\") " pod="kube-system/kube-proxy-dsfcm"
	May 20 10:45:07 functional-379700 kubelet[4756]: I0520 10:45:07.331571    4756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef35b0df-375a-4dd9-8677-c61b5cb5691b-lib-modules\") pod \"kube-proxy-dsfcm\" (UID: \"ef35b0df-375a-4dd9-8677-c61b5cb5691b\") " pod="kube-system/kube-proxy-dsfcm"
	May 20 10:45:08 functional-379700 kubelet[4756]: I0520 10:45:08.645636    4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793a79a22b60b1d432484c1d1a2872f9b9cc95e2398b430f80b05f54cfc20d31"
	May 20 10:45:10 functional-379700 kubelet[4756]: I0520 10:45:10.752369    4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 20 10:45:15 functional-379700 kubelet[4756]: I0520 10:45:15.555952    4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 20 10:46:01 functional-379700 kubelet[4756]: E0520 10:46:01.366216    4756 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:46:01 functional-379700 kubelet[4756]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:46:01 functional-379700 kubelet[4756]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:46:01 functional-379700 kubelet[4756]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:46:01 functional-379700 kubelet[4756]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:47:01 functional-379700 kubelet[4756]: E0520 10:47:01.360545    4756 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:47:01 functional-379700 kubelet[4756]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:47:01 functional-379700 kubelet[4756]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:47:01 functional-379700 kubelet[4756]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:47:01 functional-379700 kubelet[4756]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [77efc1a40094] <==
	I0520 10:42:46.573204       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:42:46.585099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:42:46.585883       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:42:46.610307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:42:46.610618       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-379700_ade68f7b-8a51-444e-a1f4-30481ba682e6!
	I0520 10:42:46.611289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c6664150-6713-470f-87d0-34badce069ba", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-379700_ade68f7b-8a51-444e-a1f4-30481ba682e6 became leader
	I0520 10:42:46.715897       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-379700_ade68f7b-8a51-444e-a1f4-30481ba682e6!
	
	
	==> storage-provisioner [e4aacaf82b0a] <==
	I0520 10:45:08.634761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:45:08.665225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:45:08.665623       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:45:26.088334       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:45:26.089001       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-379700_7380db90-3663-46d7-afcd-31cffda47400!
	I0520 10:45:26.090452       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c6664150-6713-470f-87d0-34badce069ba", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-379700_7380db90-3663-46d7-afcd-31cffda47400 became leader
	I0520 10:45:26.189264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-379700_7380db90-3663-46d7-afcd-31cffda47400!
	

-- /stdout --
** stderr ** 
	W0520 03:47:09.251677    3312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-379700 -n functional-379700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-379700 -n functional-379700: (12.8633178s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-379700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (35.23s)

TestFunctional/parallel/ConfigCmd (1.12s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-379700 config unset cpus" to be -""- but got *"W0520 03:50:25.165083   14880 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 config get cpus: exit status 14 (174.9146ms)

** stderr ** 
	W0520 03:50:25.366012   13808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-379700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0520 03:50:25.366012   13808 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-379700 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0520 03:50:25.547308    5292 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-379700 config get cpus" to be -""- but got *"W0520 03:50:25.723532   15160 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-379700 config unset cpus" to be -""- but got *"W0520 03:50:25.913156    6612 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 config get cpus: exit status 14 (178.3603ms)

** stderr ** 
	W0520 03:50:26.102530   13140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-379700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0520 03:50:26.102530   13140 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.12s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 service --namespace=default --https --url hello-node: exit status 1 (15.0217859s)

** stderr ** 
	W0520 03:51:12.970005    3004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-379700 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 service hello-node --url --format={{.IP}}: exit status 1 (15.0267053s)

** stderr ** 
	W0520 03:51:27.977469   12648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-379700 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.03s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 service hello-node --url: exit status 1 (15.0324319s)

** stderr ** 
	W0520 03:51:43.026514    2524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-379700 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestMultiControlPlane/serial/PingHostFromPods (70.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- sh -c "ping -c 1 172.25.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- sh -c "ping -c 1 172.25.240.1": exit status 1 (10.4558342s)

-- stdout --
	PING 172.25.240.1 (172.25.240.1): 56 data bytes
	
	--- 172.25.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0520 04:09:37.058815    4120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.240.1) from pod (busybox-fc5497c4f-bghlc): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-mw76w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-mw76w -- sh -c "ping -c 1 172.25.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-mw76w -- sh -c "ping -c 1 172.25.240.1": exit status 1 (10.4583629s)

-- stdout --
	PING 172.25.240.1 (172.25.240.1): 56 data bytes
	
	--- 172.25.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0520 04:09:47.992042    4792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.240.1) from pod (busybox-fc5497c4f-mw76w): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- sh -c "ping -c 1 172.25.240.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- sh -c "ping -c 1 172.25.240.1": exit status 1 (10.4484525s)

-- stdout --
	PING 172.25.240.1 (172.25.240.1): 56 data bytes
	
	--- 172.25.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0520 04:09:58.906848    9564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.240.1) from pod (busybox-fc5497c4f-qxg28): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-291700 -n ha-291700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-291700 -n ha-291700: (12.9043501s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 logs -n 25
E0520 04:10:25.048292    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 logs -n 25: (9.2586703s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-379700                    | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:55 PDT | 20 May 24 03:55 PDT |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-379700 image build -t     | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:55 PDT | 20 May 24 03:55 PDT |
	|         | localhost/my-image:functional-379700 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-379700 image ls           | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:55 PDT | 20 May 24 03:55 PDT |
	| delete  | -p functional-379700                 | functional-379700 | minikube1\jenkins | v1.33.1 | 20 May 24 03:56 PDT | 20 May 24 03:57 PDT |
	| start   | -p ha-291700 --wait=true             | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 03:57 PDT | 20 May 24 04:08 PDT |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- apply -f             | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- rollout status       | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- get pods -o          | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- get pods -o          | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-bghlc --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-mw76w --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-qxg28 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-bghlc --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-mw76w --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-qxg28 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-bghlc -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-mw76w -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-qxg28 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- get pods -o          | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-bghlc              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT |                     |
	|         | busybox-fc5497c4f-bghlc -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-mw76w              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT |                     |
	|         | busybox-fc5497c4f-mw76w -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT | 20 May 24 04:09 PDT |
	|         | busybox-fc5497c4f-qxg28              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-291700 -- exec                 | ha-291700         | minikube1\jenkins | v1.33.1 | 20 May 24 04:09 PDT |                     |
	|         | busybox-fc5497c4f-qxg28 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:57:11
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:57:11.677213    8140 out.go:291] Setting OutFile to fd 1060 ...
	I0520 03:57:11.677839    8140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:57:11.677839    8140 out.go:304] Setting ErrFile to fd 1372...
	I0520 03:57:11.677839    8140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:57:11.705674    8140 out.go:298] Setting JSON to false
	I0520 03:57:11.709708    8140 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2628,"bootTime":1716200003,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:57:11.709708    8140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:57:11.713513    8140 out.go:177] * [ha-291700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:57:11.719705    8140 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:57:11.719705    8140 notify.go:220] Checking for updates...
	I0520 03:57:11.724317    8140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:57:11.727701    8140 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:57:11.730419    8140 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:57:11.734212    8140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:57:11.737254    8140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:57:17.221655    8140 out.go:177] * Using the hyperv driver based on user configuration
	I0520 03:57:17.224692    8140 start.go:297] selected driver: hyperv
	I0520 03:57:17.224692    8140 start.go:901] validating driver "hyperv" against <nil>
	I0520 03:57:17.224692    8140 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:57:17.272938    8140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:57:17.273804    8140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:57:17.274388    8140 cni.go:84] Creating CNI manager for ""
	I0520 03:57:17.274388    8140 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 03:57:17.274388    8140 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 03:57:17.274591    8140 start.go:340] cluster config:
	{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:57:17.274591    8140 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:57:17.278673    8140 out.go:177] * Starting "ha-291700" primary control-plane node in "ha-291700" cluster
	I0520 03:57:17.280379    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:57:17.281341    8140 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 03:57:17.281341    8140 cache.go:56] Caching tarball of preloaded images
	I0520 03:57:17.281573    8140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 03:57:17.281878    8140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:57:17.282058    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 03:57:17.282642    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json: {Name:mk4e8fabedba09636c589d5d4a21388cc33f4a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:57:17.283666    8140 start.go:360] acquireMachinesLock for ha-291700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:57:17.283666    8140 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-291700"
	I0520 03:57:17.283666    8140 start.go:93] Provisioning new machine with config: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:57:17.284319    8140 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 03:57:17.288063    8140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:57:17.288423    8140 start.go:159] libmachine.API.Create for "ha-291700" (driver="hyperv")
	I0520 03:57:17.288545    8140 client.go:168] LocalClient.Create starting
	I0520 03:57:17.289279    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 03:57:17.289559    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 03:57:17.289559    8140 main.go:141] libmachine: Parsing certificate...
	I0520 03:57:17.290028    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 03:57:17.290262    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 03:57:17.290293    8140 main.go:141] libmachine: Parsing certificate...
	I0520 03:57:17.290419    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 03:57:19.388587    8140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 03:57:19.388674    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:19.388674    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 03:57:21.177001    8140 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 03:57:21.177053    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:21.177053    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 03:57:22.743473    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 03:57:22.744562    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:22.744562    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 03:57:26.400485    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 03:57:26.400936    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:26.403588    8140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 03:57:26.875493    8140 main.go:141] libmachine: Creating SSH key...
	I0520 03:57:26.983659    8140 main.go:141] libmachine: Creating VM...
	I0520 03:57:26.983735    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 03:57:29.860829    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 03:57:29.860829    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:29.861271    8140 main.go:141] libmachine: Using switch "Default Switch"
	I0520 03:57:29.861369    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 03:57:31.653955    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 03:57:31.654198    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:31.654286    8140 main.go:141] libmachine: Creating VHD
	I0520 03:57:31.654410    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 03:57:35.460910    8140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 19701806-2E15-4246-8309-72CFDE92B7AC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 03:57:35.461007    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:35.461007    8140 main.go:141] libmachine: Writing magic tar header
	I0520 03:57:35.461096    8140 main.go:141] libmachine: Writing SSH key tar header
	I0520 03:57:35.469078    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 03:57:38.671907    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:38.671907    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:38.671907    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\disk.vhd' -SizeBytes 20000MB
	I0520 03:57:41.249545    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:41.249545    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:41.249545    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-291700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 03:57:44.953673    8140 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-291700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 03:57:44.953673    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:44.954648    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-291700 -DynamicMemoryEnabled $false
	I0520 03:57:47.257963    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:47.259006    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:47.259006    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-291700 -Count 2
	I0520 03:57:49.464266    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:49.464266    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:49.465432    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-291700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\boot2docker.iso'
	I0520 03:57:52.063199    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:52.063484    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:52.063568    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-291700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\disk.vhd'
	I0520 03:57:54.772593    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:54.772593    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:54.772774    8140 main.go:141] libmachine: Starting VM...
	I0520 03:57:54.772774    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-291700
	I0520 03:57:57.880530    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:57.880530    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:57.880718    8140 main.go:141] libmachine: Waiting for host to start...
	I0520 03:57:57.880718    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:00.269802    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:00.269870    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:00.269870    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:02.916160    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:02.916160    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:03.922805    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:06.234981    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:06.235601    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:06.235681    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:08.944169    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:08.944169    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:09.947903    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:12.311823    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:12.312580    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:12.312642    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:14.963966    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:14.963966    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:15.970824    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:18.255575    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:18.256185    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:18.256185    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:20.909225    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:20.909225    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:21.914808    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:24.240165    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:24.240570    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:24.240682    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:26.888133    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:26.888133    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:26.888329    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:29.109528    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:29.109528    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:29.109528    8140 machine.go:94] provisionDockerMachine start ...
	I0520 03:58:29.110115    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:31.362984    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:31.362984    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:31.363763    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:34.045026    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:34.045839    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:34.051489    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:58:34.061724    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:58:34.061724    8140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:58:34.194737    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 03:58:34.194838    8140 buildroot.go:166] provisioning hostname "ha-291700"
	I0520 03:58:34.194989    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:36.406785    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:36.407870    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:36.407918    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:39.052577    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:39.053314    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:39.059484    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:58:39.060118    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:58:39.060118    8140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-291700 && echo "ha-291700" | sudo tee /etc/hostname
	I0520 03:58:39.230994    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-291700
	
	I0520 03:58:39.231594    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:41.434020    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:41.434020    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:41.434454    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:44.137380    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:44.137996    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:44.143306    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:58:44.143520    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:58:44.143520    8140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-291700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-291700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-291700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:58:44.300320    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:58:44.300320    8140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 03:58:44.300320    8140 buildroot.go:174] setting up certificates
	I0520 03:58:44.300320    8140 provision.go:84] configureAuth start
	I0520 03:58:44.301352    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:46.542389    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:46.542389    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:46.543329    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:49.217635    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:49.217635    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:49.217635    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:51.489627    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:51.490636    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:51.490694    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:54.137675    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:54.137675    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:54.137754    8140 provision.go:143] copyHostCerts
	I0520 03:58:54.137829    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 03:58:54.138281    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 03:58:54.138374    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 03:58:54.138766    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 03:58:54.140130    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 03:58:54.140383    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 03:58:54.140479    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 03:58:54.140926    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 03:58:54.142035    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 03:58:54.142358    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 03:58:54.142358    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 03:58:54.143030    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 03:58:54.144098    8140 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-291700 san=[127.0.0.1 172.25.246.119 ha-291700 localhost minikube]
	I0520 03:58:54.308063    8140 provision.go:177] copyRemoteCerts
	I0520 03:58:54.322456    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:58:54.322456    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:56.570920    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:56.571457    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:56.571457    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:59.199980    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:59.199980    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:59.201119    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:58:59.312533    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9900239s)
	I0520 03:58:59.312607    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 03:58:59.313068    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:58:59.359134    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 03:58:59.359667    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 03:58:59.407395    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 03:58:59.408103    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 03:58:59.454516    8140 provision.go:87] duration metric: took 15.1540601s to configureAuth
	I0520 03:58:59.454589    8140 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:58:59.455128    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:58:59.455188    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:01.726893    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:01.727004    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:01.727004    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:04.346376    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:04.346437    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:04.352089    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:04.352803    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:04.352803    8140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:59:04.498788    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:59:04.498788    8140 buildroot.go:70] root file system type: tmpfs
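The filesystem probe logged above can be run directly on any Linux guest; this sketch assumes GNU coreutils' `df --output` is available (it is in the Buildroot image, per the log).

```shell
# Same probe minikube ran over SSH above: report the filesystem type of /
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: ${fstype}"
```

On the tmpfs-rooted minikube guest this prints `root fs: tmpfs`, which is what triggers the buildroot-specific provisioning path.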
	I0520 03:59:04.499424    8140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:59:04.499522    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:06.713973    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:06.713973    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:06.713973    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:09.398058    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:09.398058    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:09.406499    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:09.406499    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:09.406499    8140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:59:09.561856    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:59:09.562009    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:11.776325    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:11.776658    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:11.776774    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:14.423430    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:14.424462    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:14.432173    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:14.432951    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:14.432951    8140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:59:16.600656    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 03:59:16.600656    8140 machine.go:97] duration metric: took 47.4910616s to provisionDockerMachine
	I0520 03:59:16.600656    8140 client.go:171] duration metric: took 1m59.3119451s to LocalClient.Create
	I0520 03:59:16.600656    8140 start.go:167] duration metric: took 1m59.312067s to libmachine.API.Create "ha-291700"
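The `diff -u old new || { mv new old; ... restart; }` command above is an idempotent-update idiom: the unit file is swapped in, and the daemon reloaded, only when the freshly rendered config differs from what is already installed (or, as in this first boot, the target does not exist yet). A minimal stand-alone sketch using temp files instead of the real systemd paths:

```shell
# Only replace the target when the newly rendered file differs (or the target is absent).
# Paths here are scratch stand-ins, not the real /lib/systemd/system/docker.service.
new=$(mktemp)
printf 'demo config\n' > "$new"
target="$(mktemp -d)/docker.service"
diff -u "$target" "$new" 2>/dev/null || { mv "$new" "$target"; echo "updated"; }
```

On first run the `diff` fails (no target), so the branch fires, matching the logged `diff: can't stat ... No such file or directory` followed by the service being installed and enabled.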
	I0520 03:59:16.600656    8140 start.go:293] postStartSetup for "ha-291700" (driver="hyperv")
	I0520 03:59:16.600656    8140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:59:16.614805    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:59:16.614805    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:18.812218    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:18.812218    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:18.813029    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:21.427905    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:21.427932    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:21.428102    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:59:21.531803    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9169304s)
	I0520 03:59:21.545657    8140 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:59:21.551658    8140 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 03:59:21.551658    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 03:59:21.551658    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 03:59:21.552652    8140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 03:59:21.552652    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 03:59:21.566784    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 03:59:21.585677    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 03:59:21.632018    8140 start.go:296] duration metric: took 5.0313549s for postStartSetup
	I0520 03:59:21.635809    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:23.831384    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:23.831384    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:23.831482    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:26.386891    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:26.386891    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:26.387346    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 03:59:26.390814    8140 start.go:128] duration metric: took 2m9.1063147s to createHost
	I0520 03:59:26.390889    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:28.587881    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:28.588770    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:28.588770    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:31.237127    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:31.237127    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:31.244708    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:31.245231    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:31.245231    8140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 03:59:31.376807    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202771.368877286
	
	I0520 03:59:31.376807    8140 fix.go:216] guest clock: 1716202771.368877286
	I0520 03:59:31.376807    8140 fix.go:229] Guest: 2024-05-20 03:59:31.368877286 -0700 PDT Remote: 2024-05-20 03:59:26.3908896 -0700 PDT m=+134.806739301 (delta=4.977987686s)
	I0520 03:59:31.376955    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:33.573407    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:33.573656    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:33.573719    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:36.190733    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:36.191756    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:36.197917    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:36.198076    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:36.198076    8140 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716202771
	I0520 03:59:36.348790    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 10:59:31 UTC 2024
	
	I0520 03:59:36.348840    8140 fix.go:236] clock set: Mon May 20 10:59:31 UTC 2024
	 (err=<nil>)
	I0520 03:59:36.348840    8140 start.go:83] releasing machines lock for "ha-291700", held for 2m19.0649801s
	I0520 03:59:36.348840    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:38.550170    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:38.550170    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:38.550170    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:41.127415    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:41.127415    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:41.133384    8140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:59:41.133563    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:41.145884    8140 ssh_runner.go:195] Run: cat /version.json
	I0520 03:59:41.145884    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:43.405200    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:43.405696    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:43.405813    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:43.430907    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:43.430907    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:43.431364    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:46.128106    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:46.128904    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:46.128904    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:59:46.155967    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:46.155967    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:46.156878    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:59:46.217549    8140 ssh_runner.go:235] Completed: cat /version.json: (5.0714936s)
	W0520 03:59:46.217549    8140 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 03:59:46.233107    8140 ssh_runner.go:195] Run: systemctl --version
	I0520 03:59:46.455694    8140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3222461s)
	I0520 03:59:46.469869    8140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 03:59:46.481363    8140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:59:46.493423    8140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 03:59:46.523897    8140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 03:59:46.524010    8140 start.go:494] detecting cgroup driver to use...
	I0520 03:59:46.524266    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:59:46.579011    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 03:59:46.621241    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:59:46.641381    8140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:59:46.654660    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:59:46.687899    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:59:46.722355    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:59:46.753932    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:59:46.789101    8140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:59:46.820349    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:59:46.857410    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:59:46.891315    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 03:59:46.921362    8140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:59:46.951780    8140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:59:46.981382    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:47.185079    8140 ssh_runner.go:195] Run: sudo systemctl restart containerd
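Each of the `sed -i -r` commands above is an idempotent in-place edit of `/etc/containerd/config.toml`; the capture group preserves the line's original indentation. Applied to a sample file (GNU sed's `-i -r` assumed, as on the guest):

```shell
# Force the cgroupfs driver: flip SystemdCgroup in a sample config, keeping indentation,
# exactly as in the logged command (run against a scratch file, not the real config.toml)
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"   # →     SystemdCgroup = false
```

Rerunning the same sed leaves the file unchanged, which is why minikube can apply the whole batch unconditionally on every start.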
	I0520 03:59:47.226791    8140 start.go:494] detecting cgroup driver to use...
	I0520 03:59:47.238769    8140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:59:47.287770    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:59:47.327982    8140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:59:47.375428    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:59:47.409377    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:59:47.445178    8140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 03:59:47.512735    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:59:47.537813    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:59:47.581583    8140 ssh_runner.go:195] Run: which cri-dockerd
	I0520 03:59:47.601562    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:59:47.619358    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:59:47.664221    8140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:59:47.865427    8140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:59:48.053930    8140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:59:48.054662    8140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:59:48.107658    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:48.307646    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:59:50.815240    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5075901s)
	I0520 03:59:50.830523    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:59:50.866822    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:59:50.907752    8140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:59:51.112012    8140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:59:51.323078    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:51.558474    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:59:51.611482    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:59:51.653346    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:51.864954    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:59:51.973953    8140 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:59:51.987354    8140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:59:51.997139    8140 start.go:562] Will wait 60s for crictl version
	I0520 03:59:52.008621    8140 ssh_runner.go:195] Run: which crictl
	I0520 03:59:52.030577    8140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:59:52.082749    8140 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 03:59:52.093416    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:59:52.131361    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:59:52.164390    8140 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 03:59:52.164390    8140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 03:59:52.172255    8140 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 03:59:52.172882    8140 ip.go:210] interface addr: 172.25.240.1/20
	I0520 03:59:52.185372    8140 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 03:59:52.191178    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
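The one-liner above rewrites `/etc/hosts` via a temp file: filter out any stale tab-anchored `host.minikube.internal` line, append the fresh mapping, then copy the result back. Demonstrated on a scratch file (bash's `$'\t'` quoting assumed, as in the logged command):

```shell
hosts=$(mktemp)   # stand-in for /etc/hosts
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.25.240.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # → 1 (stale entry replaced, not duplicated)
```

The grep-then-append shape makes the update idempotent: a host with no prior entry gains one, a host with a stale entry ends up with exactly one fresh line.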
	I0520 03:59:52.226323    8140 kubeadm.go:877] updating cluster {Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 03:59:52.226323    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:59:52.236471    8140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:59:52.258126    8140 docker.go:685] Got preloaded images: 
	I0520 03:59:52.258204    8140 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 03:59:52.271911    8140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:59:52.304744    8140 ssh_runner.go:195] Run: which lz4
	I0520 03:59:52.310287    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 03:59:52.323193    8140 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 03:59:52.329541    8140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 03:59:52.329541    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 03:59:54.317831    8140 docker.go:649] duration metric: took 2.0075404s to copy over tarball
	I0520 03:59:54.330919    8140 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:00:02.828970    8140 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4979683s)
	I0520 04:00:02.829041    8140 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 04:00:02.896932    8140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:00:02.916375    8140 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 04:00:02.958781    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:00:03.183835    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:00:06.258501    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0746618s)
	I0520 04:00:06.269817    8140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:00:06.295748    8140 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:00:06.295827    8140 cache_images.go:84] Images are preloaded, skipping loading
	I0520 04:00:06.295827    8140 kubeadm.go:928] updating node { 172.25.246.119 8443 v1.30.1 docker true true} ...
	I0520 04:00:06.295902    8140 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-291700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.246.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:00:06.305879    8140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:00:06.341542    8140 cni.go:84] Creating CNI manager for ""
	I0520 04:00:06.341542    8140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:00:06.341691    8140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:00:06.341691    8140 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.246.119 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-291700 NodeName:ha-291700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.246.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.246.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:00:06.341691    8140 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.246.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-291700"
	  kubeletExtraArgs:
	    node-ip: 172.25.246.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.246.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 04:00:06.341691    8140 kube-vip.go:115] generating kube-vip config ...
	I0520 04:00:06.356452    8140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 04:00:06.382611    8140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 04:00:06.382866    8140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0520 04:00:06.395268    8140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 04:00:06.409421    8140 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:00:06.422174    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 04:00:06.438956    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0520 04:00:06.468446    8140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:00:06.498238    8140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0520 04:00:06.528812    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 04:00:06.569522    8140 ssh_runner.go:195] Run: grep 172.25.255.254	control-plane.minikube.internal$ /etc/hosts
	I0520 04:00:06.576249    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:00:06.612665    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:00:06.812820    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:00:06.843672    8140 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700 for IP: 172.25.246.119
	I0520 04:00:06.843672    8140 certs.go:194] generating shared ca certs ...
	I0520 04:00:06.843672    8140 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:06.844502    8140 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 04:00:06.845096    8140 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 04:00:06.845293    8140 certs.go:256] generating profile certs ...
	I0520 04:00:06.846114    8140 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key
	I0520 04:00:06.846239    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.crt with IP's: []
	I0520 04:00:06.980779    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.crt ...
	I0520 04:00:06.980779    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.crt: {Name:mkd2c14963adb4751d3090614d567f51986ff21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:06.983103    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key ...
	I0520 04:00:06.983103    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key: {Name:mk948fe68dbd2be6fca73a1daf0e8449e029c49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:06.983586    8140 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f
	I0520 04:00:06.984697    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.246.119 172.25.255.254]
	I0520 04:00:07.127611    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f ...
	I0520 04:00:07.127611    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f: {Name:mk23cd13457bf6593f20ed27ae2e0a814b85ab74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.129254    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f ...
	I0520 04:00:07.129254    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f: {Name:mk4ed1c6beba67aa83ee8f47f02b788d813ee85d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.129840    8140 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt
	I0520 04:00:07.140915    8140 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key
	I0520 04:00:07.142950    8140 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key
	I0520 04:00:07.143224    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt with IP's: []
	I0520 04:00:07.264288    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt ...
	I0520 04:00:07.264288    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt: {Name:mkf98561677b3ccb212261e710a2825a6bdb74f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.266055    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key ...
	I0520 04:00:07.266055    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key: {Name:mk594ed759da3a7df8be676ed30b3bcaa23c6905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.267062    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 04:00:07.267752    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 04:00:07.267991    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 04:00:07.268217    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 04:00:07.268389    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 04:00:07.268389    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 04:00:07.268909    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 04:00:07.277117    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 04:00:07.277341    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 04:00:07.278018    8140 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 04:00:07.278054    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 04:00:07.278262    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 04:00:07.278777    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 04:00:07.278995    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 04:00:07.279289    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 04:00:07.279289    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.279289    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 04:00:07.279289    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 04:00:07.281118    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:00:07.326334    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:00:07.373398    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:00:07.419039    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:00:07.460338    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 04:00:07.511330    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:00:07.553788    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:00:07.601851    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 04:00:07.646361    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:00:07.695010    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 04:00:07.737464    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 04:00:07.776501    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:00:07.819432    8140 ssh_runner.go:195] Run: openssl version
	I0520 04:00:07.843212    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:00:07.874148    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.881328    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.898954    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.924334    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:00:07.960638    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 04:00:07.993144    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 04:00:08.003471    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 04:00:08.015830    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 04:00:08.037363    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 04:00:08.070929    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 04:00:08.105068    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 04:00:08.111642    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 04:00:08.126173    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 04:00:08.148720    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:00:08.183227    8140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:00:08.190326    8140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 04:00:08.190408    8140 kubeadm.go:391] StartCluster: {Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clu
sterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:00:08.198993    8140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:00:08.234029    8140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 04:00:08.267001    8140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:00:08.297749    8140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:00:08.317812    8140 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:00:08.317878    8140 kubeadm.go:156] found existing configuration files:
	
	I0520 04:00:08.333498    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 04:00:08.353247    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:00:08.365878    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:00:08.397069    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 04:00:08.415532    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:00:08.428140    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:00:08.463944    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 04:00:08.481906    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:00:08.496825    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:00:08.530869    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 04:00:08.549806    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:00:08.565019    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:00:08.582045    8140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:00:09.037291    8140 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 04:00:23.832184    8140 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 04:00:23.832184    8140 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:00:23.832184    8140 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:00:23.833726    8140 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:00:23.833917    8140 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:00:23.834144    8140 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:00:23.836933    8140 out.go:204]   - Generating certificates and keys ...
	I0520 04:00:23.837148    8140 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:00:23.837269    8140 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:00:23.837461    8140 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 04:00:23.838194    8140 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-291700 localhost] and IPs [172.25.246.119 127.0.0.1 ::1]
	I0520 04:00:23.838372    8140 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 04:00:23.838664    8140 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-291700 localhost] and IPs [172.25.246.119 127.0.0.1 ::1]
	I0520 04:00:23.838830    8140 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 04:00:23.838964    8140 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 04:00:23.839085    8140 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 04:00:23.839260    8140 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:00:23.839927    8140 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:00:23.840117    8140 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:00:23.840249    8140 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:00:23.842925    8140 out.go:204]   - Booting up control plane ...
	I0520 04:00:23.843708    8140 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:00:23.843708    8140 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:00:23.843708    8140 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:00:23.843708    8140 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:00:23.844417    8140 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:00:23.844622    8140 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:00:23.844673    8140 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 04:00:23.844673    8140 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 04:00:23.845207    8140 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003291694s
	I0520 04:00:23.845400    8140 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 04:00:23.845400    8140 kubeadm.go:309] [api-check] The API server is healthy after 9.070452796s
	I0520 04:00:23.845764    8140 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:00:23.845764    8140 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:00:23.845764    8140 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:00:23.845764    8140 kubeadm.go:309] [mark-control-plane] Marking the node ha-291700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:00:23.845764    8140 kubeadm.go:309] [bootstrap-token] Using token: xb4118.ouebrb3avn5afcax
	I0520 04:00:23.850647    8140 out.go:204]   - Configuring RBAC rules ...
	I0520 04:00:23.851759    8140 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:00:23.851869    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:00:23.851869    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:00:23.852447    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:00:23.852528    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:00:23.852528    8140 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:00:23.853058    8140 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:00:23.853130    8140 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:00:23.853130    8140 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:00:23.853130    8140 kubeadm.go:309] 
	I0520 04:00:23.853130    8140 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:00:23.853130    8140 kubeadm.go:309] 
	I0520 04:00:23.853687    8140 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:00:23.853687    8140 kubeadm.go:309] 
	I0520 04:00:23.853825    8140 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:00:23.853825    8140 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:00:23.853825    8140 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:00:23.853825    8140 kubeadm.go:309] 
	I0520 04:00:23.853825    8140 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:00:23.854407    8140 kubeadm.go:309] 
	I0520 04:00:23.854491    8140 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:00:23.854491    8140 kubeadm.go:309] 
	I0520 04:00:23.854491    8140 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:00:23.854491    8140 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:00:23.854491    8140 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:00:23.855022    8140 kubeadm.go:309] 
	I0520 04:00:23.855056    8140 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:00:23.855056    8140 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:00:23.855056    8140 kubeadm.go:309] 
	I0520 04:00:23.855056    8140 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xb4118.ouebrb3avn5afcax \
	I0520 04:00:23.855764    8140 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 04:00:23.855764    8140 kubeadm.go:309] 	--control-plane 
	I0520 04:00:23.855764    8140 kubeadm.go:309] 
	I0520 04:00:23.855764    8140 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:00:23.855764    8140 kubeadm.go:309] 
	I0520 04:00:23.855764    8140 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xb4118.ouebrb3avn5afcax \
	I0520 04:00:23.856417    8140 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 04:00:23.856417    8140 cni.go:84] Creating CNI manager for ""
	I0520 04:00:23.856417    8140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:00:23.858696    8140 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 04:00:23.872570    8140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 04:00:23.883857    8140 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 04:00:23.883857    8140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 04:00:23.942208    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 04:00:24.695137    8140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:00:24.710688    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-291700 minikube.k8s.io/updated_at=2024_05_20T04_00_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-291700 minikube.k8s.io/primary=true
	I0520 04:00:24.710688    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:24.727002    8140 ops.go:34] apiserver oom_adj: -16
	I0520 04:00:24.925393    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:25.426637    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:25.930407    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:26.429614    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:26.931808    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:27.432184    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:27.934080    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:28.436973    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:28.937050    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:29.442179    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:29.925817    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:30.430734    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:30.929992    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:31.432975    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:31.928233    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:32.437930    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:32.943134    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:33.439850    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:33.937116    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:34.438993    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:34.924426    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:35.433440    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:35.937339    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:36.071204    8140 kubeadm.go:1107] duration metric: took 11.3760521s to wait for elevateKubeSystemPrivileges
	W0520 04:00:36.071324    8140 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:00:36.071324    8140 kubeadm.go:393] duration metric: took 27.8808785s to StartCluster
	I0520 04:00:36.071324    8140 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:36.071617    8140 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:00:36.073188    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:36.075011    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 04:00:36.075114    8140 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:00:36.075240    8140 start.go:240] waiting for startup goroutines ...
	I0520 04:00:36.075114    8140 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:00:36.075240    8140 addons.go:69] Setting storage-provisioner=true in profile "ha-291700"
	I0520 04:00:36.075394    8140 addons.go:69] Setting default-storageclass=true in profile "ha-291700"
	I0520 04:00:36.075394    8140 addons.go:234] Setting addon storage-provisioner=true in "ha-291700"
	I0520 04:00:36.075517    8140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-291700"
	I0520 04:00:36.075663    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:00:36.075663    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:00:36.076661    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:36.077238    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:36.227472    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 04:00:36.589410    8140 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 04:00:38.468419    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:38.468467    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:38.471189    8140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:00:38.473197    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:38.473197    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:38.473197    8140 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:00:38.473197    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:00:38.474190    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:38.474190    8140 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:00:38.475189    8140 kapi.go:59] client config for ha-291700: &rest.Config{Host:"https://172.25.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:00:38.476194    8140 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 04:00:38.476194    8140 addons.go:234] Setting addon default-storageclass=true in "ha-291700"
	I0520 04:00:38.477198    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:00:38.478192    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:40.894867    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:40.894867    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:40.894867    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:00:40.895902    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:40.895902    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:40.895902    8140 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:00:40.895902    8140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:00:40.895902    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:43.299917    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:43.300255    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:43.300333    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:00:43.793212    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:00:43.793702    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:43.793854    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:00:43.940931    8140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:00:46.051262    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:00:46.051314    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:46.051314    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:00:46.196808    8140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:00:46.352490    8140 round_trippers.go:463] GET https://172.25.255.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 04:00:46.352490    8140 round_trippers.go:469] Request Headers:
	I0520 04:00:46.352490    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:00:46.352490    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:00:46.369833    8140 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 04:00:46.370860    8140 round_trippers.go:463] PUT https://172.25.255.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 04:00:46.370860    8140 round_trippers.go:469] Request Headers:
	I0520 04:00:46.370860    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:00:46.370860    8140 round_trippers.go:473]     Content-Type: application/json
	I0520 04:00:46.370860    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:00:46.374809    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:00:46.378410    8140 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 04:00:46.382098    8140 addons.go:505] duration metric: took 10.3059651s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 04:00:46.382098    8140 start.go:245] waiting for cluster config update ...
	I0520 04:00:46.382098    8140 start.go:254] writing updated cluster config ...
	I0520 04:00:46.385158    8140 out.go:177] 
	I0520 04:00:46.394090    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:00:46.394090    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:00:46.402100    8140 out.go:177] * Starting "ha-291700-m02" control-plane node in "ha-291700" cluster
	I0520 04:00:46.404094    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:00:46.404094    8140 cache.go:56] Caching tarball of preloaded images
	I0520 04:00:46.405100    8140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:00:46.405100    8140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:00:46.405100    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:00:46.408113    8140 start.go:360] acquireMachinesLock for ha-291700-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:00:46.408113    8140 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-291700-m02"
	I0520 04:00:46.408113    8140 start.go:93] Provisioning new machine with config: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:00:46.408113    8140 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 04:00:46.411093    8140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:00:46.411093    8140 start.go:159] libmachine.API.Create for "ha-291700" (driver="hyperv")
	I0520 04:00:46.411093    8140 client.go:168] LocalClient.Create starting
	I0520 04:00:46.411093    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:00:48.400489    8140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:00:48.400489    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:48.400489    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:00:50.218640    8140 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:00:50.218727    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:50.218803    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:00:51.723699    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:00:51.724168    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:51.724168    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:00:55.451444    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:00:55.451541    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:55.454081    8140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:00:55.922259    8140 main.go:141] libmachine: Creating SSH key...
	I0520 04:00:56.005523    8140 main.go:141] libmachine: Creating VM...
	I0520 04:00:56.005523    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:00:58.991996    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:00:58.992335    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:58.992404    8140 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:00:58.992465    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:01:00.816037    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:01:00.816037    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:00.816037    8140 main.go:141] libmachine: Creating VHD
	I0520 04:01:00.816895    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:01:04.722880    8140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D75076A2-73A0-410E-9D70-05A9600AE588
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:01:04.723590    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:04.723590    8140 main.go:141] libmachine: Writing magic tar header
	I0520 04:01:04.723590    8140 main.go:141] libmachine: Writing SSH key tar header
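The "magic tar header" lines above refer to the boot2docker disk trick: the generated SSH key is packed into a tar archive that is written at the very start of the raw 10 MB fixed VHD, and the guest's automount service unpacks it on first boot before the disk is formatted. A minimal sketch of that idea, using a plain file as a stand-in for the VHD (file names and the placeholder key are illustrative, not minikube's actual values):

```shell
#!/usr/bin/env sh
# Sketch of the "magic tar header" step: write a tar archive containing the
# SSH key into the head of a fixed-size raw disk image, then read it back
# the way a guest-side automount could.
work=$(mktemp -d)
echo "ssh-rsa AAAA... fake-key" > "$work/id_rsa.pub"   # placeholder key

# fixed 10 MB "disk", analogous to New-VHD -SizeBytes 10MB -Fixed
dd if=/dev/zero of="$work/disk.img" bs=1M count=10 2>/dev/null

# lay a tar of the key down at the start of the image, keeping its size
tar -C "$work" -cf "$work/key.tar" id_rsa.pub
dd if="$work/key.tar" of="$work/disk.img" conv=notrunc 2>/dev/null

# the trailing zeros read as tar's end-of-archive marker, so the image
# itself is a valid archive and the member can be extracted directly
extracted=$(tar -xOf "$work/disk.img" id_rsa.pub 2>/dev/null)
rm -rf "$work"
echo "$extracted"
```

The subsequent `Convert-VHD ... -VHDType Dynamic` and `Resize-VHD` calls then grow the disk without disturbing those leading bytes.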
	I0520 04:01:04.737455    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:01:07.983005    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:07.983005    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:07.983621    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\disk.vhd' -SizeBytes 20000MB
	I0520 04:01:10.589860    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:10.589860    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:10.590150    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-291700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:01:14.378563    8140 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version

	----          ----- ----------- ----------------- ------   ------             -------
	ha-291700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:01:14.378563    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:14.378563    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-291700-m02 -DynamicMemoryEnabled $false
	I0520 04:01:16.775152    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:16.775152    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:16.775573    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-291700-m02 -Count 2
	I0520 04:01:19.101810    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:19.101810    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:19.102891    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-291700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\boot2docker.iso'
	I0520 04:01:21.773822    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:21.773822    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:21.774389    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-291700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\disk.vhd'
	I0520 04:01:24.598758    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:24.598758    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:24.598758    8140 main.go:141] libmachine: Starting VM...
	I0520 04:01:24.599429    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-291700-m02
	I0520 04:01:27.795590    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:27.795590    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:27.795590    8140 main.go:141] libmachine: Waiting for host to start...
	I0520 04:01:27.795590    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:30.209918    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:30.209918    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:30.210853    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:32.907719    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:32.907719    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:33.918033    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:36.296883    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:36.296883    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:36.297015    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:39.010538    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:39.010764    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:40.018298    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:42.354914    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:42.354914    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:42.355423    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:45.014991    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:45.015817    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:46.021977    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:48.331696    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:48.331696    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:48.331696    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:50.987847    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:50.987847    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:51.994420    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:54.348091    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:54.348091    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:54.348091    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:57.013792    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:01:57.013792    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:57.013792    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:59.253425    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:59.253466    8140 main.go:141] libmachine: [stderr =====>] : 
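The repeated `state`/`ipaddresses[0]` queries above are libmachine's wait loop: it polls Hyper-V roughly once a second until the adapter reports an address (empty stdout until DHCP hands out a lease at 04:01:57). A sketch of that poll-until-nonempty pattern, with a stub standing in for the PowerShell call (the stub and attempt counter are illustrative, not minikube's code):

```shell
#!/usr/bin/env sh
# Poll-until-nonempty, as in the "Waiting for host to start..." loop above.
# The real loop shells out to PowerShell each iteration:
#   ((Get-VM $name).networkadapters[0]).ipaddresses[0]
# Here the answer is simulated to stay empty until the 3rd attempt.
attempt=0
ip=""
while [ -z "$ip" ]; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge 3 ]; then
        ip="172.25.251.208"    # DHCP finally handed the VM a lease
    fi
    # the log shows about a second of sleep between empty answers
done
echo "attempt=$attempt ip=$ip"
```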
	I0520 04:01:59.253560    8140 machine.go:94] provisionDockerMachine start ...
	I0520 04:01:59.253560    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:01.520560    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:01.520560    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:01.521474    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:04.178773    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:04.178773    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:04.185698    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:04.195726    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:04.195726    8140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:02:04.331212    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:02:04.331212    8140 buildroot.go:166] provisioning hostname "ha-291700-m02"
	I0520 04:02:04.331342    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:06.566253    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:06.566253    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:06.566253    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:09.237358    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:09.238011    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:09.243686    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:09.244360    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:09.244360    8140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-291700-m02 && echo "ha-291700-m02" | sudo tee /etc/hostname
	I0520 04:02:09.412095    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-291700-m02
	
	I0520 04:02:09.412203    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:11.636662    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:11.637663    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:11.637663    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:14.302806    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:14.302806    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:14.310080    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:14.310901    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:14.310901    8140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-291700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-291700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-291700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:02:14.471496    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
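The `/etc/hosts` edit sent over SSH above is idempotent: it appends a `127.0.1.1` entry only when the hostname is absent, rewriting an existing `127.0.1.1` line in place. The same grep/sed/append dance can be run against a scratch file without sudo (the scratch file and its stale entry are illustrative; hostname taken from the log):

```shell
#!/usr/bin/env sh
# Idempotent hostname entry, as in the SSH command above, on a temp copy
# instead of the real /etc/hosts.
name=ha-291700-m02
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 stale-name\n' > "$hosts"

if ! grep -q "[[:space:]]$name\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
        # an existing 127.0.1.1 entry is rewritten in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
    else
        # otherwise a fresh entry is appended
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi

result=$(grep '^127\.0\.1\.1' "$hosts")
echo "$result"
rm -f "$hosts"
```

Running it twice leaves the file unchanged, which is why the provisioner can safely re-run it on every start.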
	I0520 04:02:14.471496    8140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 04:02:14.472043    8140 buildroot.go:174] setting up certificates
	I0520 04:02:14.472043    8140 provision.go:84] configureAuth start
	I0520 04:02:14.472136    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:16.693716    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:16.693783    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:16.693840    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:19.355751    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:19.355814    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:19.355814    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:21.595264    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:21.595264    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:21.595264    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:24.228167    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:24.228787    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:24.228787    8140 provision.go:143] copyHostCerts
	I0520 04:02:24.228997    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 04:02:24.229311    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 04:02:24.229311    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 04:02:24.229463    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 04:02:24.230703    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 04:02:24.230998    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 04:02:24.230998    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 04:02:24.230998    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 04:02:24.232316    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 04:02:24.232534    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 04:02:24.232534    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 04:02:24.233037    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 04:02:24.233935    8140 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-291700-m02 san=[127.0.0.1 172.25.251.208 ha-291700-m02 localhost minikube]
	I0520 04:02:24.392333    8140 provision.go:177] copyRemoteCerts
	I0520 04:02:24.408286    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:02:24.408286    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:26.659100    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:26.659281    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:26.659389    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:29.377658    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:29.377658    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:29.377658    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:02:29.484046    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.075681s)
	I0520 04:02:29.484046    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 04:02:29.484752    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 04:02:29.532060    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 04:02:29.532185    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:02:29.579485    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 04:02:29.580136    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:02:29.625438    8140 provision.go:87] duration metric: took 15.1533724s to configureAuth
	I0520 04:02:29.625498    8140 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:02:29.626045    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:02:29.626140    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:31.851742    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:31.851742    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:31.851742    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:34.536144    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:34.537162    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:34.543649    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:34.544235    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:34.544388    8140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:02:34.687099    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:02:34.687188    8140 buildroot.go:70] root file system type: tmpfs
	I0520 04:02:34.687386    8140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:02:34.687466    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:36.937580    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:36.937580    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:36.938331    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:39.610719    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:39.610719    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:39.616566    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:39.617333    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:39.617333    8140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.246.119"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:02:39.779578    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.246.119
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:02:39.779747    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:41.983059    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:41.983434    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:41.983561    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:44.602661    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:44.602661    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:44.609013    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:44.609809    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:44.609809    8140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:02:46.759757    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
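The `diff ... || { mv ...; systemctl ...; }` one-liner above installs the freshly written unit only when it differs from what is on disk; a missing target also makes `diff` fail, which is why the first provision (the "can't stat" message) still triggers the move and reload. A sketch of that write-if-changed idiom with scratch paths and a counter standing in for the `systemctl` calls (all names here are illustrative):

```shell
#!/usr/bin/env sh
# Write-if-changed: move .new into place and "reload" only when diff fails,
# i.e. the target is missing or its content differs.
dir=$(mktemp -d)
reloads=0

install_unit() {
    diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
        mv "$dir/docker.service.new" "$dir/docker.service"
        reloads=$((reloads + 1))   # stands in for daemon-reload/enable/restart
    }
}

printf '[Unit]\nDescription=demo\n' > "$dir/docker.service.new"
install_unit    # target missing -> diff fails -> install + reload

printf '[Unit]\nDescription=demo\n' > "$dir/docker.service.new"
install_unit    # identical content -> diff succeeds -> no-op

rm -rf "$dir"
echo "reloads=$reloads"
```

Only content changes restart Docker, so repeated provisioning of an unchanged machine does not bounce the daemon.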
	
	I0520 04:02:46.759830    8140 machine.go:97] duration metric: took 47.5062001s to provisionDockerMachine
	I0520 04:02:46.759887    8140 client.go:171] duration metric: took 2m0.3486219s to LocalClient.Create
	I0520 04:02:46.759968    8140 start.go:167] duration metric: took 2m0.3487027s to libmachine.API.Create "ha-291700"
	I0520 04:02:46.760024    8140 start.go:293] postStartSetup for "ha-291700-m02" (driver="hyperv")
	I0520 04:02:46.760063    8140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:02:46.776172    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:02:46.776172    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:48.990546    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:48.990546    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:48.990546    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:51.649439    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:51.649439    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:51.649555    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:02:51.754923    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.978664s)
	I0520 04:02:51.769718    8140 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:02:51.776835    8140 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 04:02:51.776835    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 04:02:51.777650    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 04:02:51.778175    8140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 04:02:51.778175    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 04:02:51.792174    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:02:51.811370    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 04:02:51.864157    8140 start.go:296] duration metric: took 5.1040861s for postStartSetup
	I0520 04:02:51.867618    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:54.064146    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:54.064146    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:54.064777    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:56.712779    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:56.712779    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:56.712779    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:02:56.716179    8140 start.go:128] duration metric: took 2m10.3078801s to createHost
	I0520 04:02:56.716179    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:58.925340    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:58.925554    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:58.925627    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:01.569119    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:01.569119    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:01.576085    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:03:01.576085    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:03:01.576085    8140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 04:03:01.706791    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202981.697667814
	
	I0520 04:03:01.706791    8140 fix.go:216] guest clock: 1716202981.697667814
	I0520 04:03:01.706791    8140 fix.go:229] Guest: 2024-05-20 04:03:01.697667814 -0700 PDT Remote: 2024-05-20 04:02:56.7161798 -0700 PDT m=+345.131732601 (delta=4.981488014s)
	I0520 04:03:01.706791    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:03.885551    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:03.885551    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:03.886746    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:06.520617    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:06.520671    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:06.525944    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:03:06.526698    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:03:06.526698    8140 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716202981
	I0520 04:03:06.670776    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 11:03:01 UTC 2024
	
	I0520 04:03:06.670776    8140 fix.go:236] clock set: Mon May 20 11:03:01 UTC 2024
	 (err=<nil>)
	I0520 04:03:06.670776    8140 start.go:83] releasing machines lock for "ha-291700-m02", held for 2m20.2624615s
	I0520 04:03:06.670776    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:08.899283    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:08.899283    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:08.899283    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:11.550196    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:11.550196    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:11.553206    8140 out.go:177] * Found network options:
	I0520 04:03:11.556114    8140 out.go:177]   - NO_PROXY=172.25.246.119
	W0520 04:03:11.558562    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:03:11.559999    8140 out.go:177]   - NO_PROXY=172.25.246.119
	W0520 04:03:11.562645    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:03:11.563996    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:03:11.566991    8140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:03:11.566991    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:11.576991    8140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 04:03:11.576991    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:13.880281    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:13.880281    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:13.880281    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:13.880960    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:13.881098    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:13.881199    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:16.693059    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:16.693059    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:16.693405    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:03:16.719097    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:16.719097    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:16.719758    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:03:16.851569    8140 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2745694s)
	I0520 04:03:16.851569    8140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2845691s)
	W0520 04:03:16.851569    8140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:03:16.864755    8140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 04:03:16.898939    8140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:03:16.899021    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:03:16.899325    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:03:16.946858    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 04:03:16.978301    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:03:16.997587    8140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:03:17.012140    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:03:17.046017    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:03:17.083278    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:03:17.116264    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:03:17.148565    8140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:03:17.182636    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:03:17.216492    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:03:17.249667    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:03:17.283054    8140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:03:17.313077    8140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:03:17.344656    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:17.549393    8140 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 04:03:17.583807    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:03:17.599119    8140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:03:17.635086    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:03:17.669529    8140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:03:17.714513    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:03:17.751967    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:03:17.788129    8140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:03:17.851138    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:03:17.875620    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:03:17.922853    8140 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:03:17.941153    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:03:17.959159    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:03:18.003718    8140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:03:18.212248    8140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:03:18.407987    8140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:03:18.407987    8140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:03:18.464695    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:18.669456    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:03:21.216395    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5469341s)
	I0520 04:03:21.230270    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:03:21.271662    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:03:21.307537    8140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:03:21.507281    8140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:03:21.716883    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:21.911393    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:03:21.957509    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:03:21.996695    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:22.194239    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:03:22.309128    8140 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:03:22.322016    8140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:03:22.330698    8140 start.go:562] Will wait 60s for crictl version
	I0520 04:03:22.342433    8140 ssh_runner.go:195] Run: which crictl
	I0520 04:03:22.361547    8140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:03:22.424880    8140 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 04:03:22.438711    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:03:22.488929    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:03:22.522926    8140 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 04:03:22.526610    8140 out.go:177]   - env NO_PROXY=172.25.246.119
	I0520 04:03:22.528781    8140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 04:03:22.536005    8140 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 04:03:22.536005    8140 ip.go:210] interface addr: 172.25.240.1/20
	I0520 04:03:22.550115    8140 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 04:03:22.556089    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:03:22.577499    8140 mustload.go:65] Loading cluster: ha-291700
	I0520 04:03:22.577499    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:03:22.578644    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:03:24.818680    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:24.818680    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:24.818680    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:03:24.819327    8140 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700 for IP: 172.25.251.208
	I0520 04:03:24.819327    8140 certs.go:194] generating shared ca certs ...
	I0520 04:03:24.819327    8140 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:03:24.820044    8140 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 04:03:24.820231    8140 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 04:03:24.820231    8140 certs.go:256] generating profile certs ...
	I0520 04:03:24.821033    8140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key
	I0520 04:03:24.821816    8140 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5
	I0520 04:03:24.821816    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.246.119 172.25.251.208 172.25.255.254]
	I0520 04:03:25.170504    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5 ...
	I0520 04:03:25.171502    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5: {Name:mk85482bea0486d2a9770aad77782ccb41e9e5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:03:25.172453    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5 ...
	I0520 04:03:25.172453    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5: {Name:mkf153b9fb4974203d1d3ed68ef74d40bc1c5df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:03:25.173162    8140 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt
	I0520 04:03:25.187234    8140 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key
	I0520 04:03:25.188233    8140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key
	I0520 04:03:25.188233    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 04:03:25.188803    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 04:03:25.189091    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 04:03:25.189239    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 04:03:25.189418    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 04:03:25.189611    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 04:03:25.189821    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 04:03:25.189821    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 04:03:25.190229    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 04:03:25.190229    8140 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 04:03:25.190229    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 04:03:25.190229    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 04:03:25.191247    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 04:03:25.191247    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 04:03:25.191247    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 04:03:25.192255    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 04:03:25.192255    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:25.192255    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 04:03:25.192255    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:03:27.487624    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:27.487729    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:27.487839    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:30.216359    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:03:30.216450    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:30.216634    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:03:30.319215    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 04:03:30.331917    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 04:03:30.371914    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 04:03:30.379303    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 04:03:30.418096    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 04:03:30.426241    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 04:03:30.467470    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 04:03:30.475146    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 04:03:30.512432    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 04:03:30.520853    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 04:03:30.555306    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 04:03:30.562074    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 04:03:30.583314    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:03:30.653138    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:03:30.718123    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:03:30.764102    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:03:30.807772    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 04:03:30.854255    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:03:30.901978    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:03:30.953423    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 04:03:30.999512    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 04:03:31.044040    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:03:31.102591    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 04:03:31.145037    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 04:03:31.183744    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 04:03:31.216348    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 04:03:31.251884    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 04:03:31.287349    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 04:03:31.319240    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 04:03:31.348796    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 04:03:31.400708    8140 ssh_runner.go:195] Run: openssl version
	I0520 04:03:31.421599    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 04:03:31.452365    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 04:03:31.461518    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 04:03:31.474429    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 04:03:31.494737    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:03:31.533727    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:03:31.566072    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:31.572921    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:31.584218    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:31.605374    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:03:31.639396    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 04:03:31.669630    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 04:03:31.676540    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 04:03:31.691161    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 04:03:31.712884    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 04:03:31.745913    8140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:03:31.752893    8140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 04:03:31.753213    8140 kubeadm.go:928] updating node {m02 172.25.251.208 8443 v1.30.1 docker true true} ...
	I0520 04:03:31.753489    8140 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-291700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.251.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:03:31.753515    8140 kube-vip.go:115] generating kube-vip config ...
	I0520 04:03:31.766356    8140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 04:03:31.791451    8140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 04:03:31.792480    8140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 04:03:31.805801    8140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 04:03:31.821924    8140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 04:03:31.835822    8140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 04:03:31.860122    8140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0520 04:03:31.860191    8140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0520 04:03:31.860191    8140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0520 04:03:32.985976    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:03:32.998032    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:03:33.005502    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 04:03:33.005502    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 04:03:33.158163    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:03:33.171657    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:03:33.218379    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 04:03:33.218379    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 04:03:33.449943    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:03:33.507238    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:03:33.520318    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:03:33.553855    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 04:03:33.553855    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 04:03:34.234455    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 04:03:34.253562    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0520 04:03:34.298521    8140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:03:34.330740    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 04:03:34.382401    8140 ssh_runner.go:195] Run: grep 172.25.255.254	control-plane.minikube.internal$ /etc/hosts
	I0520 04:03:34.389577    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:03:34.428743    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:34.632047    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:03:34.666189    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:03:34.666970    8140 start.go:316] joinCluster: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:03:34.667024    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 04:03:34.667024    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:03:36.871474    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:36.871474    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:36.871989    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:39.515806    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:03:39.515883    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:39.516037    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:03:39.706353    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0392501s)
	I0520 04:03:39.706474    8140 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:03:39.707980    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o1rrbw.w0v2ukl5tfk9vfwn --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m02 --control-plane --apiserver-advertise-address=172.25.251.208 --apiserver-bind-port=8443"
	I0520 04:04:23.481489    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o1rrbw.w0v2ukl5tfk9vfwn --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m02 --control-plane --apiserver-advertise-address=172.25.251.208 --apiserver-bind-port=8443": (43.7733839s)
	I0520 04:04:23.481489    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 04:04:24.315645    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-291700-m02 minikube.k8s.io/updated_at=2024_05_20T04_04_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-291700 minikube.k8s.io/primary=false
	I0520 04:04:24.509390    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-291700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 04:04:24.676390    8140 start.go:318] duration metric: took 50.00934s to joinCluster
	I0520 04:04:24.676638    8140 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:04:24.682745    8140 out.go:177] * Verifying Kubernetes components...
	I0520 04:04:24.677900    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:04:24.697768    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:04:25.062692    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:04:25.089727    8140 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:04:25.090275    8140 kapi.go:59] client config for ha-291700: &rest.Config{Host:"https://172.25.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 04:04:25.090275    8140 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.255.254:8443 with https://172.25.246.119:8443
	I0520 04:04:25.091284    8140 node_ready.go:35] waiting up to 6m0s for node "ha-291700-m02" to be "Ready" ...
	I0520 04:04:25.091284    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:25.091284    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:25.091284    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:25.091284    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:25.105435    8140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0520 04:04:25.601447    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:25.601447    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:25.601447    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:25.601447    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:25.608056    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:26.106471    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:26.106543    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:26.106543    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:26.106543    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:26.112279    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:26.594514    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:26.594594    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:26.594594    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:26.594594    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:26.602860    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:27.101643    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:27.101643    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:27.101643    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:27.101643    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:27.108278    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:27.109928    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:27.594572    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:27.594654    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:27.594746    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:27.594746    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:27.600627    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:28.102604    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:28.102604    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:28.102604    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:28.102604    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:28.119566    8140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0520 04:04:28.597285    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:28.597285    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:28.597285    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:28.597285    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:28.603041    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:29.094635    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:29.094635    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:29.094758    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:29.094758    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:29.100345    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:29.601569    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:29.601569    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:29.601764    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:29.601764    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:29.609936    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:29.609936    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:30.093110    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:30.093110    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:30.093172    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:30.093172    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:30.097670    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:30.605493    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:30.605493    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:30.605699    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:30.605699    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:30.859146    8140 round_trippers.go:574] Response Status: 200 OK in 253 milliseconds
	I0520 04:04:31.105400    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:31.105400    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:31.105400    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:31.105400    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:31.150966    8140 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0520 04:04:31.598375    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:31.598684    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:31.598684    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:31.598684    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:31.604392    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:32.102158    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:32.102433    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:32.102433    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:32.102433    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:32.107801    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:32.109140    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:32.605536    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:32.605536    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:32.605536    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:32.605536    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:32.610630    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:33.099028    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:33.099028    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:33.099028    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:33.099028    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:33.105439    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:33.600063    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:33.600209    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:33.600209    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:33.600209    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:33.607139    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:34.101635    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:34.101635    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:34.101635    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:34.101635    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:34.108200    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:34.602487    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:34.602699    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:34.602699    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:34.602699    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:34.608525    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:34.609523    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:35.092398    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:35.092478    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:35.092478    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:35.092478    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:35.098264    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:35.602514    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:35.602703    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:35.602703    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:35.602703    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:35.610464    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:36.096793    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:36.096978    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:36.096978    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:36.096978    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:36.102216    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:36.595072    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:36.595116    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:36.595116    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:36.595116    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:36.607895    8140 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 04:04:36.609605    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:37.104930    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:37.104930    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.105195    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.105195    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.110498    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:37.595116    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:37.595230    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.595230    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.595359    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.599730    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.601185    8140 node_ready.go:49] node "ha-291700-m02" has status "Ready":"True"
	I0520 04:04:37.601243    8140 node_ready.go:38] duration metric: took 12.509881s for node "ha-291700-m02" to be "Ready" ...
	I0520 04:04:37.601243    8140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:04:37.601372    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:37.601442    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.601442    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.601466    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.611237    8140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 04:04:37.620211    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.621213    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hczp
	I0520 04:04:37.621213    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.621213    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.621213    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.627227    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:37.628245    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:37.628245    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.628245    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.628245    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.632261    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.632261    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.633227    8140 pod_ready.go:81] duration metric: took 13.0159ms for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.633227    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.633227    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gglsg
	I0520 04:04:37.633227    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.633227    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.633227    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.637226    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:04:37.638427    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:37.638639    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.638639    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.638666    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.644902    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:37.645692    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.645692    8140 pod_ready.go:81] duration metric: took 12.465ms for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.645692    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.645692    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700
	I0520 04:04:37.645692    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.645692    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.645692    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.649507    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:04:37.650505    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:37.650505    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.650505    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.650505    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.654505    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.654505    8140 pod_ready.go:92] pod "etcd-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.654505    8140 pod_ready.go:81] duration metric: took 8.8135ms for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.654505    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.654505    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m02
	I0520 04:04:37.654505    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.654505    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.654505    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.659509    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:37.660510    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:37.660510    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.660510    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.660510    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.664550    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.665207    8140 pod_ready.go:92] pod "etcd-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.665207    8140 pod_ready.go:81] duration metric: took 10.7018ms for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.665267    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.800661    8140 request.go:629] Waited for 135.3935ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:04:37.800661    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:04:37.800661    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.800661    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.800661    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.806562    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:38.005031    8140 request.go:629] Waited for 197.2291ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.005232    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.005232    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.005232    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.005232    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.011805    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:38.012751    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:38.012751    8140 pod_ready.go:81] duration metric: took 347.4829ms for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.012751    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.209956    8140 request.go:629] Waited for 196.8945ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:04:38.209956    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:04:38.209956    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.209956    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.209956    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.216638    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:38.399243    8140 request.go:629] Waited for 180.2564ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:38.399440    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:38.399440    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.399440    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.399440    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.404038    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:38.405368    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:38.405422    8140 pod_ready.go:81] duration metric: took 392.6708ms for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.405475    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.601879    8140 request.go:629] Waited for 196.3366ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:04:38.602173    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:04:38.602173    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.602173    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.602173    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.609750    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:38.808623    8140 request.go:629] Waited for 197.9781ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.808894    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.808969    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.808969    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.809019    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.813338    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:38.814321    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:38.814321    8140 pod_ready.go:81] duration metric: took 408.8456ms for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.814321    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.997286    8140 request.go:629] Waited for 181.7548ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:04:38.997351    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:04:38.997351    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.997351    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.997351    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.001934    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:39.201087    8140 request.go:629] Waited for 196.8246ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.201308    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.201308    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.201308    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.201308    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.204916    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:04:39.205848    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:39.205848    8140 pod_ready.go:81] duration metric: took 391.5269ms for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.205848    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.402173    8140 request.go:629] Waited for 195.354ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:04:39.402376    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:04:39.402376    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.402376    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.402376    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.412330    8140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 04:04:39.606195    8140 request.go:629] Waited for 192.7183ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.606277    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.606277    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.606277    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.606277    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.611919    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:39.613076    8140 pod_ready.go:92] pod "kube-proxy-94csf" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:39.613076    8140 pod_ready.go:81] duration metric: took 407.2264ms for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.613076    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.808628    8140 request.go:629] Waited for 195.5524ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:04:39.808865    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:04:39.808865    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.808967    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.808993    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.817046    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:40.011007    8140 request.go:629] Waited for 192.7305ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.011244    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.011244    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.011244    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.011244    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.016066    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:40.017502    8140 pod_ready.go:92] pod "kube-proxy-xq4tv" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:40.017673    8140 pod_ready.go:81] duration metric: took 404.5964ms for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.017673    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.197148    8140 request.go:629] Waited for 179.1438ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:04:40.197339    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:04:40.197433    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.197433    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.197433    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.202745    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:40.399873    8140 request.go:629] Waited for 196.6544ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.399971    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.400116    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.400116    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.400116    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.404941    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:40.405734    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:40.405734    8140 pod_ready.go:81] duration metric: took 388.0611ms for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.405734    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.602482    8140 request.go:629] Waited for 196.7478ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:04:40.602901    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:04:40.602901    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.602901    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.602901    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.608947    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:40.807185    8140 request.go:629] Waited for 197.0206ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:40.807185    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:40.807185    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.807185    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.807185    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.812862    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:40.814574    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:40.814574    8140 pod_ready.go:81] duration metric: took 408.839ms for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.814574    8140 pod_ready.go:38] duration metric: took 3.213269s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:04:40.814636    8140 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:04:40.826469    8140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:04:40.857187    8140 api_server.go:72] duration metric: took 16.1804815s to wait for apiserver process to appear ...
	I0520 04:04:40.857187    8140 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:04:40.857187    8140 api_server.go:253] Checking apiserver healthz at https://172.25.246.119:8443/healthz ...
	I0520 04:04:40.866236    8140 api_server.go:279] https://172.25.246.119:8443/healthz returned 200:
	ok
	I0520 04:04:40.866751    8140 round_trippers.go:463] GET https://172.25.246.119:8443/version
	I0520 04:04:40.866847    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.866890    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.866890    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.868009    8140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 04:04:40.868971    8140 api_server.go:141] control plane version: v1.30.1
	I0520 04:04:40.869032    8140 api_server.go:131] duration metric: took 11.845ms to wait for apiserver health ...
	I0520 04:04:40.869032    8140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 04:04:41.011085    8140 request.go:629] Waited for 141.8285ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.011168    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.011168    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.011274    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.011274    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.019389    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:41.026583    8140 system_pods.go:59] 17 kube-system pods found
	I0520 04:04:41.026583    8140 system_pods.go:61] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:04:41.026715    8140 system_pods.go:74] duration metric: took 157.6823ms to wait for pod list to return data ...
	I0520 04:04:41.026715    8140 default_sa.go:34] waiting for default service account to be created ...
	I0520 04:04:41.200270    8140 request.go:629] Waited for 173.3681ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:04:41.200403    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:04:41.200403    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.200403    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.200403    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.206311    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:41.206792    8140 default_sa.go:45] found service account: "default"
	I0520 04:04:41.206792    8140 default_sa.go:55] duration metric: took 180.0774ms for default service account to be created ...
	I0520 04:04:41.206850    8140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 04:04:41.403835    8140 request.go:629] Waited for 196.6805ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.403939    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.403939    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.403939    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.403939    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.411570    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:41.419192    8140 system_pods.go:86] 17 kube-system pods found
	I0520 04:04:41.419268    8140 system_pods.go:89] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:04:41.419268    8140 system_pods.go:89] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:04:41.419268    8140 system_pods.go:89] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:04:41.419268    8140 system_pods.go:89] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:04:41.419556    8140 system_pods.go:126] duration metric: took 212.6087ms to wait for k8s-apps to be running ...
	I0520 04:04:41.419556    8140 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 04:04:41.430689    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:04:41.457656    8140 system_svc.go:56] duration metric: took 38.0998ms WaitForService to wait for kubelet
	I0520 04:04:41.457656    8140 kubeadm.go:576] duration metric: took 16.7809492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:04:41.457656    8140 node_conditions.go:102] verifying NodePressure condition ...
	I0520 04:04:41.605339    8140 request.go:629] Waited for 147.4792ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes
	I0520 04:04:41.605446    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes
	I0520 04:04:41.605553    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.605617    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.605617    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.613028    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:41.614391    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:04:41.614478    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:04:41.614478    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:04:41.614478    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:04:41.614478    8140 node_conditions.go:105] duration metric: took 156.8215ms to run NodePressure ...
	I0520 04:04:41.614478    8140 start.go:240] waiting for startup goroutines ...
	I0520 04:04:41.614553    8140 start.go:254] writing updated cluster config ...
	I0520 04:04:41.618050    8140 out.go:177] 
	I0520 04:04:41.630301    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:04:41.630301    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:04:41.636963    8140 out.go:177] * Starting "ha-291700-m03" control-plane node in "ha-291700" cluster
	I0520 04:04:41.639281    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:04:41.639472    8140 cache.go:56] Caching tarball of preloaded images
	I0520 04:04:41.639472    8140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:04:41.640004    8140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:04:41.640264    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:04:41.645064    8140 start.go:360] acquireMachinesLock for ha-291700-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:04:41.645349    8140 start.go:364] duration metric: took 285.7µs to acquireMachinesLock for "ha-291700-m03"
	I0520 04:04:41.645535    8140 start.go:93] Provisioning new machine with config: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:04:41.645569    8140 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0520 04:04:41.647884    8140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:04:41.647884    8140 start.go:159] libmachine.API.Create for "ha-291700" (driver="hyperv")
	I0520 04:04:41.648677    8140 client.go:168] LocalClient.Create starting
	I0520 04:04:41.648824    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:04:41.648824    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:04:41.649399    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:04:41.649532    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:04:41.649760    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:04:41.649760    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:04:41.649960    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:04:43.635145    8140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:04:43.636170    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:43.636257    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:04:45.459649    8140 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:04:45.459649    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:45.459744    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:04:47.035628    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:04:47.035628    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:47.036308    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:04:51.020446    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:04:51.021379    8140 main.go:141] libmachine: [stderr =====>] : 
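The Get-VMSwitch query above returns the switches as JSON and prefers an External switch, falling back to the built-in "Default Switch" identified by its well-known GUID. A minimal Go sketch of that selection step is below — `pickSwitch` is a hypothetical helper for illustration, not minikube's actual function, and the descending sort is an assumption standing in for the `Sort-Object` in the query:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query in the log.
// SwitchType is numeric in the JSON output (0=Private, 1=Internal, 2=External).
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// GUID of the Windows built-in "Default Switch", as seen in the query above.
const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

// pickSwitch prefers an External switch; otherwise it falls back to the
// Default Switch matched by GUID, as the Where-Object filter does.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	// Consider External switches (SwitchType 2) first.
	sort.SliceStable(switches, func(i, j int) bool {
		return switches[i].SwitchType > switches[j].SwitchType
	})
	for _, s := range switches {
		if s.SwitchType == 2 || s.Id == defaultSwitchID {
			return s.Name, nil
		}
	}
	return "", fmt.Errorf("no usable Hyper-V switch found")
}

func main() {
	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // Default Switch
}
```

With only the Default Switch present, as in this run, the fallback branch selects it — hence "Using switch \"Default Switch\"" later in the log.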
	I0520 04:04:51.023579    8140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:04:51.448241    8140 main.go:141] libmachine: Creating SSH key...
	I0520 04:04:51.599957    8140 main.go:141] libmachine: Creating VM...
	I0520 04:04:51.599957    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:04:54.783181    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:04:54.784132    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:54.784298    8140 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:04:54.784298    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:04:56.647658    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:04:56.648359    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:56.648359    8140 main.go:141] libmachine: Creating VHD
	I0520 04:04:56.648359    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:05:00.602064    8140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D69B95F6-287C-4368-9338-09435B916E07
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:05:00.602064    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:00.602064    8140 main.go:141] libmachine: Writing magic tar header
	I0520 04:05:00.602064    8140 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:05:00.614017    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:05:03.937827    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:03.937827    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:03.938251    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\disk.vhd' -SizeBytes 20000MB
	I0520 04:05:06.602045    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:06.602946    8140 main.go:141] libmachine: [stderr =====>] : 
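The New-VHD/Convert-VHD/Resize-VHD sequence above supports the "magic tar header" step: a small *fixed* VHD stores its data starting at byte 0 of the file, so a tar archive carrying the node's SSH key can be written directly at the start of the raw disk before the image is converted to a dynamic VHD and grown to the requested 20000MB; a boot2docker-style guest unpacks that archive on first boot. A hedged Go sketch of building such a userdata tar is below — the entry paths and `writeUserdataTar` name are illustrative, not minikube's exact layout:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// writeUserdataTar builds the small tar archive written at the start of the
// raw fixed VHD; the guest unpacks it on first boot to seed SSH access.
// The real driver writes the key pair generated in the "Creating SSH key"
// step; the paths here are an assumption for illustration.
func writeUserdataTar(pubKey []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	entries := []struct {
		name string
		body []byte
		mode int64
	}{
		{".ssh/", nil, 0700},
		{".ssh/authorized_keys", pubKey, 0644},
	}
	for _, e := range entries {
		hdr := &tar.Header{Name: e.name, Mode: e.mode, Size: int64(len(e.body))}
		if e.body == nil {
			hdr.Typeflag = tar.TypeDir
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(e.body); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := writeUserdataTar([]byte("ssh-rsa AAAA... jenkins@minikube1"))
	if err != nil {
		panic(err)
	}
	// The first tar header begins at offset 0, which is why a fixed VHD
	// (data at the front, footer at the end) is created first.
	fmt.Println(len(data))
}
```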
	I0520 04:05:06.603031    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-291700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:05:10.482588    8140 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-291700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:05:10.482588    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:10.482588    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-291700-m03 -DynamicMemoryEnabled $false
	I0520 04:05:12.867816    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:12.867816    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:12.867965    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-291700-m03 -Count 2
	I0520 04:05:15.197610    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:15.197610    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:15.197749    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-291700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\boot2docker.iso'
	I0520 04:05:17.984346    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:17.984346    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:17.984900    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-291700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\disk.vhd'
	I0520 04:05:20.873036    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:20.873441    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:20.873480    8140 main.go:141] libmachine: Starting VM...
	I0520 04:05:20.873556    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-291700-m03
	I0520 04:05:24.107797    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:24.108487    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:24.108487    8140 main.go:141] libmachine: Waiting for host to start...
	I0520 04:05:24.108633    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:26.592401    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:26.592401    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:26.592401    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:29.349788    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:29.349788    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:30.365589    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:32.755998    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:32.756578    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:32.756578    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:35.447089    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:35.447089    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:36.452879    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:38.811886    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:38.811886    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:38.811886    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:41.481105    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:41.481914    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:42.494860    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:44.819328    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:44.819328    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:44.819328    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:47.541923    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:47.541982    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:48.544515    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:50.888132    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:50.888347    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:50.888347    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:53.613931    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:05:53.614795    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:53.614954    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:55.870785    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:55.871581    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:55.871808    8140 machine.go:94] provisionDockerMachine start ...
	I0520 04:05:55.872049    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:58.142888    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:58.142888    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:58.143026    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:00.870554    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:00.870554    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:00.877622    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:00.878359    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:00.878359    8140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:06:01.017692    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:06:01.017755    8140 buildroot.go:166] provisioning hostname "ha-291700-m03"
	I0520 04:06:01.017820    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:03.284053    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:03.284053    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:03.284053    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:05.949430    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:05.949826    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:05.960186    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:05.961216    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:05.961216    8140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-291700-m03 && echo "ha-291700-m03" | sudo tee /etc/hostname
	I0520 04:06:06.138083    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-291700-m03
	
	I0520 04:06:06.138192    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:08.420630    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:08.420630    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:08.421453    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:11.098063    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:11.098063    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:11.103377    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:11.104090    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:11.104090    8140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-291700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-291700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-291700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:06:11.273244    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:06:11.273802    8140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 04:06:11.273802    8140 buildroot.go:174] setting up certificates
	I0520 04:06:11.273802    8140 provision.go:84] configureAuth start
	I0520 04:06:11.273933    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:13.539411    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:13.539411    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:13.539472    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:16.215406    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:16.215406    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:16.215474    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:18.497138    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:18.497138    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:18.497138    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:21.191760    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:21.192122    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:21.192122    8140 provision.go:143] copyHostCerts
	I0520 04:06:21.192205    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 04:06:21.192730    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 04:06:21.192730    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 04:06:21.193339    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 04:06:21.194098    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 04:06:21.194975    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 04:06:21.195037    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 04:06:21.195585    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 04:06:21.196429    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 04:06:21.196429    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 04:06:21.196429    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 04:06:21.197242    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 04:06:21.198033    8140 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-291700-m03 san=[127.0.0.1 172.25.246.110 ha-291700-m03 localhost minikube]
	I0520 04:06:21.782734    8140 provision.go:177] copyRemoteCerts
	I0520 04:06:21.795820    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:06:21.796899    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:24.060032    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:24.060032    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:24.060151    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:26.729004    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:26.729004    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:26.729486    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:06:26.843778    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0479499s)
	I0520 04:06:26.843778    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 04:06:26.844314    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 04:06:26.892348    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 04:06:26.893036    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:06:26.954915    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 04:06:26.955574    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:06:27.004647    8140 provision.go:87] duration metric: took 15.7308197s to configureAuth
	I0520 04:06:27.004695    8140 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:06:27.005184    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:06:27.005184    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:29.254115    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:29.254115    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:29.254115    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:31.916021    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:31.916021    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:31.923057    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:31.923600    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:31.923704    8140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:06:32.067687    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:06:32.067771    8140 buildroot.go:70] root file system type: tmpfs
	I0520 04:06:32.067897    8140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:06:32.068043    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:34.327459    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:34.327459    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:34.328088    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:37.023339    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:37.023339    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:37.030458    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:37.030458    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:37.030458    8140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.246.119"
	Environment="NO_PROXY=172.25.246.119,172.25.251.208"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:06:37.208095    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.246.119
	Environment=NO_PROXY=172.25.246.119,172.25.251.208
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:06:37.208170    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:39.458227    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:39.458415    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:39.458415    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:42.145132    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:42.145132    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:42.151659    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:42.151659    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:42.151659    8140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:06:44.386267    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 04:06:44.386267    8140 machine.go:97] duration metric: took 48.514382s to provisionDockerMachine
	I0520 04:06:44.386267    8140 client.go:171] duration metric: took 2m2.7373937s to LocalClient.Create
	I0520 04:06:44.386267    8140 start.go:167] duration metric: took 2m2.7381864s to libmachine.API.Create "ha-291700"
	I0520 04:06:44.386267    8140 start.go:293] postStartSetup for "ha-291700-m03" (driver="hyperv")
	I0520 04:06:44.386267    8140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:06:44.403499    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:06:44.403499    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:46.662596    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:46.662596    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:46.663368    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:49.351795    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:49.351974    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:49.352313    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:06:49.473961    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0704535s)
	I0520 04:06:49.487231    8140 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:06:49.494968    8140 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 04:06:49.494968    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 04:06:49.495310    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 04:06:49.496400    8140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 04:06:49.496485    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 04:06:49.515434    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:06:49.537022    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 04:06:49.587024    8140 start.go:296] duration metric: took 5.2007488s for postStartSetup
	I0520 04:06:49.589751    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:51.829484    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:51.829484    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:51.829484    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:54.516793    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:54.516793    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:54.517181    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:06:54.519719    8140 start.go:128] duration metric: took 2m12.8739365s to createHost
	I0520 04:06:54.519719    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:56.778876    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:56.778876    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:56.779108    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:59.467237    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:59.467237    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:59.478959    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:59.480025    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:59.480025    8140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 04:06:59.625534    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716203219.629909686
	
	I0520 04:06:59.625655    8140 fix.go:216] guest clock: 1716203219.629909686
	I0520 04:06:59.625655    8140 fix.go:229] Guest: 2024-05-20 04:06:59.629909686 -0700 PDT Remote: 2024-05-20 04:06:54.519719 -0700 PDT m=+582.934892001 (delta=5.110190686s)
	I0520 04:06:59.625751    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:01.925040    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:01.925040    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:01.925936    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:04.600576    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:04.601497    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:04.608186    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:07:04.608186    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:07:04.608735    8140 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716203219
	I0520 04:07:04.763853    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 11:06:59 UTC 2024
	
	I0520 04:07:04.763937    8140 fix.go:236] clock set: Mon May 20 11:06:59 UTC 2024
	 (err=<nil>)
	I0520 04:07:04.763937    8140 start.go:83] releasing machines lock for "ha-291700-m03", held for 2m23.1183587s
	I0520 04:07:04.765190    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:06.997206    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:06.997302    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:06.997359    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:09.687310    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:09.687310    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:09.693394    8140 out.go:177] * Found network options:
	I0520 04:07:09.696472    8140 out.go:177]   - NO_PROXY=172.25.246.119,172.25.251.208
	W0520 04:07:09.698761    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.698761    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:07:09.700661    8140 out.go:177]   - NO_PROXY=172.25.246.119,172.25.251.208
	W0520 04:07:09.703505    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.703505    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.705510    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.705510    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:07:09.707515    8140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:07:09.707515    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:09.716395    8140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 04:07:09.716395    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:12.050046    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:12.050046    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:12.050046    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:12.050437    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:12.050437    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:12.050437    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:14.867371    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:14.867442    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:14.867587    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:07:14.896464    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:14.896542    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:14.896814    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:07:14.973713    8140 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2573098s)
	W0520 04:07:14.973713    8140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:07:14.987818    8140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 04:07:15.111594    8140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4039692s)
	I0520 04:07:15.111594    8140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:07:15.111688    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:07:15.111924    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:07:15.161634    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 04:07:15.199321    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:07:15.220029    8140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:07:15.233400    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:07:15.269617    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:07:15.306731    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:07:15.340920    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:07:15.375146    8140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:07:15.414974    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:07:15.449603    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:07:15.485061    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:07:15.518070    8140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:07:15.551366    8140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:07:15.584237    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:15.799409    8140 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 04:07:15.837886    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:07:15.850741    8140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:07:15.889807    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:07:15.929703    8140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:07:15.988696    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:07:16.027206    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:07:16.071417    8140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:07:16.138587    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:07:16.165725    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:07:16.219401    8140 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:07:16.238640    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:07:16.257035    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:07:16.302431    8140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:07:16.528082    8140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:07:16.716621    8140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:07:16.716738    8140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:07:16.770200    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:16.980015    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:07:19.515064    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5339643s)
	I0520 04:07:19.527630    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:07:19.564655    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:07:19.602943    8140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:07:19.801164    8140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:07:20.004012    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:20.209540    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:07:20.252228    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:07:20.290282    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:20.503636    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:07:20.628805    8140 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:07:20.642694    8140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:07:20.651979    8140 start.go:562] Will wait 60s for crictl version
	I0520 04:07:20.665194    8140 ssh_runner.go:195] Run: which crictl
	I0520 04:07:20.687145    8140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:07:20.748980    8140 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 04:07:20.759673    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:07:20.804966    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:07:20.842972    8140 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 04:07:20.845198    8140 out.go:177]   - env NO_PROXY=172.25.246.119
	I0520 04:07:20.850008    8140 out.go:177]   - env NO_PROXY=172.25.246.119,172.25.251.208
	I0520 04:07:20.852925    8140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 04:07:20.861317    8140 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 04:07:20.861317    8140 ip.go:210] interface addr: 172.25.240.1/20
	I0520 04:07:20.877430    8140 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 04:07:20.884804    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:07:20.908564    8140 mustload.go:65] Loading cluster: ha-291700
	I0520 04:07:20.909514    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:07:20.910237    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:07:23.163287    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:23.163287    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:23.163287    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:07:23.163897    8140 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700 for IP: 172.25.246.110
	I0520 04:07:23.163897    8140 certs.go:194] generating shared ca certs ...
	I0520 04:07:23.163897    8140 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:07:23.164894    8140 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 04:07:23.165256    8140 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 04:07:23.165422    8140 certs.go:256] generating profile certs ...
	I0520 04:07:23.166103    8140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key
	I0520 04:07:23.166103    8140 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3
	I0520 04:07:23.166103    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.246.119 172.25.251.208 172.25.246.110 172.25.255.254]
	I0520 04:07:23.462542    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3 ...
	I0520 04:07:23.462542    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3: {Name:mke17989d921d57f7069f27df1aaa6c3fa0167c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:07:23.464635    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3 ...
	I0520 04:07:23.464635    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3: {Name:mk2883fbcbcb35c3737a67461e3ce0ec6404974d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:07:23.465082    8140 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt
	I0520 04:07:23.479090    8140 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key
	I0520 04:07:23.481094    8140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 04:07:23.482194    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 04:07:23.482498    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 04:07:23.483110    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 04:07:23.483110    8140 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 04:07:23.483110    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 04:07:23.483801    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 04:07:23.484266    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 04:07:23.484659    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 04:07:23.485212    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 04:07:23.485212    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:23.485212    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 04:07:23.485747    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 04:07:23.486012    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:07:25.785961    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:25.786152    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:25.786152    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:28.544673    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:07:28.545646    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:28.545843    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:07:28.655567    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 04:07:28.664274    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 04:07:28.696903    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 04:07:28.704165    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 04:07:28.744512    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 04:07:28.750583    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 04:07:28.786064    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 04:07:28.792648    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 04:07:28.830866    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 04:07:28.838643    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 04:07:28.873050    8140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 04:07:28.883462    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 04:07:28.904035    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:07:28.954088    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:07:29.005677    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:07:29.053434    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:07:29.117189    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 04:07:29.168983    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:07:29.218539    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:07:29.262883    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 04:07:29.309234    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:07:29.364734    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 04:07:29.417129    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 04:07:29.466025    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 04:07:29.501015    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 04:07:29.534436    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 04:07:29.568618    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 04:07:29.602186    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 04:07:29.635521    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 04:07:29.668296    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 04:07:29.714683    8140 ssh_runner.go:195] Run: openssl version
	I0520 04:07:29.736283    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 04:07:29.769442    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 04:07:29.776239    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 04:07:29.789027    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 04:07:29.810991    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 04:07:29.844774    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 04:07:29.878643    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 04:07:29.887557    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 04:07:29.901917    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 04:07:29.925508    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:07:29.963567    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:07:29.997787    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:30.008905    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:30.021933    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:30.043971    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
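The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed CA directory lookup: each CA under `/usr/share/ca-certificates` is made reachable via a `<subject-hash>.0` symlink in `/etc/ssl/certs` (e.g. `b5213941.0` for `minikubeCA.pem`). A minimal, self-contained sketch of the same scheme against a throwaway cert (the CN and temp paths are stand-ins, not values from this run):

```shell
set -e
tmp=$(mktemp -d)
# Generate a throwaway self-signed CA as a stand-in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
# OpenSSL resolves trust anchors via <subject-hash>.0 symlinks in the certs dir.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
```

The `test -L … || ln -fs …` form in the log makes the operation idempotent across repeated provisioning passes.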
	I0520 04:07:30.090670    8140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:07:30.097207    8140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 04:07:30.098418    8140 kubeadm.go:928] updating node {m03 172.25.246.110 8443 v1.30.1 docker true true} ...
	I0520 04:07:30.098418    8140 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-291700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.246.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:07:30.098418    8140 kube-vip.go:115] generating kube-vip config ...
	I0520 04:07:30.112363    8140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 04:07:30.137495    8140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 04:07:30.138230    8140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.255.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
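Before writing this manifest, minikube loads the IPVS modules (`modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack`) that kube-vip's auto-enabled control-plane load-balancing depends on. A read-only way to check those modules without root (module names are from the log; probing `/proc/modules` is an illustrative alternative, not what minikube itself runs):

```shell
mods="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
for m in $mods; do
  # Each loaded module appears at line start in /proc/modules, followed by a space.
  if grep -q "^$m " /proc/modules 2>/dev/null; then
    echo "$m: loaded"
  else
    echo "$m: not loaded"
  fi
done
```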
	I0520 04:07:30.152347    8140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 04:07:30.174290    8140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 04:07:30.188191    8140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 04:07:30.208249    8140 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 04:07:30.208396    8140 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 04:07:30.208396    8140 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 04:07:30.208547    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:07:30.208547    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:07:30.228440    8140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:07:30.228440    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:07:30.228440    8140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:07:30.236307    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 04:07:30.236557    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 04:07:30.285879    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:07:30.285879    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 04:07:30.286184    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 04:07:30.300086    8140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:07:30.329907    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 04:07:30.330335    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
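The `binary.go:76` lines above fetch each Kubernetes binary from dl.k8s.io with a pinned `.sha256` sidecar and verify it before the scp to the node. The verification step can be sketched locally with a stand-in file (the real flow downloads both the binary and its sidecar first):

```shell
set -e
tmp=$(mktemp -d)
printf 'stand-in for the kubelet binary' > "$tmp/kubelet"
# The .sha256 sidecar holds the hex digest of the binary.
sha256sum "$tmp/kubelet" | awk '{print $1}' > "$tmp/kubelet.sha256"
# Verify: sha256sum -c expects "<digest>  <path>" (two spaces between them).
echo "$(cat "$tmp/kubelet.sha256")  $tmp/kubelet" | sha256sum -c -
```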
	I0520 04:07:31.555903    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 04:07:31.574872    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0520 04:07:31.608357    8140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:07:31.642522    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 04:07:31.693690    8140 ssh_runner.go:195] Run: grep 172.25.255.254	control-plane.minikube.internal$ /etc/hosts
	I0520 04:07:31.722153    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
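The one-liner above pins `control-plane.minikube.internal` to the HA VIP (172.25.255.254) by stripping any stale tab-separated entry from `/etc/hosts` and appending the current one. The same rewrite, run against a temp copy instead of `/etc/hosts` (the 172.25.0.1 stale entry is invented for the demo):

```shell
set -e
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any line ending in "<TAB>control-plane.minikube.internal", then re-add the VIP.
pat="$(printf '\tcontrol-plane.minikube.internal')"
{ grep -v "$pat\$" "$hosts"; \
  printf '172.25.255.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

The preceding `grep 172.25.255.254<TAB>control-plane.minikube.internal$ /etc/hosts` lets minikube skip the rewrite entirely when the entry is already correct.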
	I0520 04:07:31.760523    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:31.969876    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:07:32.000328    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:07:32.000328    8140 start.go:316] joinCluster: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.246.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:07:32.001416    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 04:07:32.001416    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:07:34.256107    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:34.256163    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:34.256163    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:36.963330    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:07:36.963330    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:36.963733    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:07:37.177852    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1764281s)
	I0520 04:07:37.177852    8140 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.246.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:07:37.177852    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lu63e6.9wbxciunnwhkook6 --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m03 --control-plane --apiserver-advertise-address=172.25.246.110 --apiserver-bind-port=8443"
	I0520 04:08:21.357924    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lu63e6.9wbxciunnwhkook6 --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m03 --control-plane --apiserver-advertise-address=172.25.246.110 --apiserver-bind-port=8443": (44.1800015s)
	I0520 04:08:21.357924    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 04:08:22.165043    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-291700-m03 minikube.k8s.io/updated_at=2024_05_20T04_08_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-291700 minikube.k8s.io/primary=false
	I0520 04:08:22.333910    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-291700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 04:08:22.492314    8140 start.go:318] duration metric: took 50.4919052s to joinCluster
	I0520 04:08:22.493294    8140 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.25.246.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:08:22.496287    8140 out.go:177] * Verifying Kubernetes components...
	I0520 04:08:22.493294    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:08:22.514586    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:08:22.929437    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:08:22.970508    8140 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:08:22.971262    8140 kapi.go:59] client config for ha-291700: &rest.Config{Host:"https://172.25.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 04:08:22.971386    8140 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.255.254:8443 with https://172.25.246.119:8443
	I0520 04:08:22.971967    8140 node_ready.go:35] waiting up to 6m0s for node "ha-291700-m03" to be "Ready" ...
	I0520 04:08:22.972372    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:22.972372    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:22.972372    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:22.972372    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:22.991537    8140 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0520 04:08:23.483443    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:23.483443    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:23.483443    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:23.483443    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:23.489131    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:23.975185    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:23.975248    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:23.975310    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:23.975310    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:23.980629    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:24.473704    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:24.473856    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:24.473856    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:24.473856    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:24.477477    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:24.986735    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:24.986735    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:24.986735    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:24.986826    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:25.013583    8140 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0520 04:08:25.015227    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:25.478831    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:25.478831    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:25.478831    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:25.478831    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:25.484414    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:25.982834    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:25.982834    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:25.982834    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:25.982834    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:25.987416    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:26.474666    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:26.474726    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:26.474782    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:26.474782    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:26.479482    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:26.984489    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:26.984489    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:26.984560    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:26.984560    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:26.988881    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:27.487357    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:27.487407    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:27.487407    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:27.487407    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:27.492551    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:27.493328    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:27.977344    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:27.977521    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:27.977521    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:27.977521    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:27.983501    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:28.486359    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:28.486569    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:28.486569    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:28.486569    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:28.490892    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:28.981851    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:28.981851    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:28.981851    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:28.981851    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:29.151834    8140 round_trippers.go:574] Response Status: 200 OK in 169 milliseconds
	I0520 04:08:29.487263    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:29.487263    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:29.487263    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:29.487263    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:29.493489    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:29.494320    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:29.975530    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:29.975530    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:29.975530    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:29.975530    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:29.992116    8140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0520 04:08:30.474490    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:30.474490    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:30.474616    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:30.474616    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:30.478916    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:30.978736    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:30.978930    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:30.978930    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:30.978930    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:30.983978    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:31.482442    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:31.482547    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:31.482547    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:31.482547    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:31.497518    8140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0520 04:08:31.499058    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:31.983321    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:31.983410    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:31.983410    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:31.983486    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:31.990824    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:08:32.482716    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:32.482835    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:32.482891    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:32.482891    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:32.488464    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:32.984907    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:32.985030    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:32.985030    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:32.985030    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:32.989461    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:33.485429    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:33.485429    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:33.485429    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:33.485647    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:33.490353    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:33.975187    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:33.975273    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:33.975333    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:33.975333    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:33.980195    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:33.980400    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:34.478909    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:34.478909    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:34.478909    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:34.478909    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:34.485041    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:34.981699    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:34.982004    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:34.982004    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:34.982004    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:34.986814    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:35.481613    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:35.481894    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:35.481976    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:35.481976    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:35.487705    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:35.984343    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:35.984343    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:35.984343    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:35.984343    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:35.990172    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:35.990822    8140 node_ready.go:49] node "ha-291700-m03" has status "Ready":"True"
	I0520 04:08:35.990822    8140 node_ready.go:38] duration metric: took 13.0185341s for node "ha-291700-m03" to be "Ready" ...
	I0520 04:08:35.990900    8140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:08:35.991021    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:35.991021    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:35.991021    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:35.991074    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.011325    8140 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0520 04:08:36.022495    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.023055    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hczp
	I0520 04:08:36.023130    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.023130    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.023130    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.028430    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:36.029717    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.029717    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.029717    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.029717    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.034371    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.035827    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.035878    8140 pod_ready.go:81] duration metric: took 13.383ms for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.035878    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.036015    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gglsg
	I0520 04:08:36.036015    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.036015    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.036015    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.040341    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:36.041280    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.041328    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.041328    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.041328    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.045649    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.045804    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.045804    8140 pod_ready.go:81] duration metric: took 9.926ms for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.045804    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.045804    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700
	I0520 04:08:36.045804    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.045804    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.045804    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.050919    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:36.052536    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.052634    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.052634    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.052634    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.060722    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:08:36.061874    8140 pod_ready.go:92] pod "etcd-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.061874    8140 pod_ready.go:81] duration metric: took 16.0703ms for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.061874    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.061874    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m02
	I0520 04:08:36.061874    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.061874    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.062428    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.065510    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:36.067708    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:36.067708    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.067708    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.067708    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.071807    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.072443    8140 pod_ready.go:92] pod "etcd-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.072501    8140 pod_ready.go:81] duration metric: took 10.6271ms for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.072501    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.188196    8140 request.go:629] Waited for 115.4247ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m03
	I0520 04:08:36.188414    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m03
	I0520 04:08:36.188466    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.188466    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.188466    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.192939    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.394412    8140 request.go:629] Waited for 199.4678ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:36.394412    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:36.394412    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.394412    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.394412    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.402137    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:08:36.405608    8140 pod_ready.go:92] pod "etcd-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.405652    8140 pod_ready.go:81] duration metric: took 333.1504ms for pod "etcd-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.405707    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.587003    8140 request.go:629] Waited for 180.8893ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:08:36.587136    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:08:36.587136    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.587136    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.587258    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.598587    8140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 04:08:36.789108    8140 request.go:629] Waited for 188.4749ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.789318    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.789318    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.789318    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.789318    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.794630    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:36.795876    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.795943    8140 pod_ready.go:81] duration metric: took 390.235ms for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.795943    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.993081    8140 request.go:629] Waited for 196.8581ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:08:36.993081    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:08:36.993380    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.993380    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.993501    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.999279    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.194590    8140 request.go:629] Waited for 194.9954ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:37.194859    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:37.194859    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.194859    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.194859    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.201462    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:37.202652    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:37.202815    8140 pod_ready.go:81] duration metric: took 406.8113ms for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.202904    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.397048    8140 request.go:629] Waited for 194.0153ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m03
	I0520 04:08:37.397357    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m03
	I0520 04:08:37.397388    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.397388    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.397458    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.402211    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.597055    8140 request.go:629] Waited for 192.9502ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:37.597055    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:37.597055    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.597353    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.597353    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.602235    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.604228    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:37.604302    8140 pod_ready.go:81] duration metric: took 401.3972ms for pod "kube-apiserver-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.604302    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.800120    8140 request.go:629] Waited for 195.4894ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:08:37.800382    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:08:37.800382    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.800444    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.800444    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.805185    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.987466    8140 request.go:629] Waited for 179.9799ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:37.987466    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:37.987466    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.987466    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.987691    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.992873    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:37.994327    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:37.994327    8140 pod_ready.go:81] duration metric: took 389.9599ms for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.994389    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.195282    8140 request.go:629] Waited for 200.5386ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:08:38.195472    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:08:38.195574    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.195574    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.195574    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.200312    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:38.399279    8140 request.go:629] Waited for 196.8346ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:38.399390    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:38.399482    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.399482    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.399482    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.406421    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:38.406624    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:38.406624    8140 pod_ready.go:81] duration metric: took 412.2344ms for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.406624    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.595145    8140 request.go:629] Waited for 188.5206ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m03
	I0520 04:08:38.595145    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m03
	I0520 04:08:38.595145    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.595145    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.595145    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.599049    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:38.799994    8140 request.go:629] Waited for 199.5237ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:38.800108    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:38.800108    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.800263    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.800263    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.805621    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:38.807012    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:38.807012    8140 pod_ready.go:81] duration metric: took 400.3877ms for pod "kube-controller-manager-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.807012    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.985717    8140 request.go:629] Waited for 177.9129ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:08:38.985830    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:08:38.985830    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.985958    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.985958    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.991332    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:39.188202    8140 request.go:629] Waited for 195.7834ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:39.188452    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:39.188511    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.188511    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.188511    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.194248    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:39.195451    8140 pod_ready.go:92] pod "kube-proxy-94csf" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:39.195451    8140 pod_ready.go:81] duration metric: took 388.4379ms for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.195451    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qg9wf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.390538    8140 request.go:629] Waited for 195.0864ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qg9wf
	I0520 04:08:39.390538    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qg9wf
	I0520 04:08:39.390538    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.390538    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.390538    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.399199    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:08:39.593956    8140 request.go:629] Waited for 193.5558ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:39.594252    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:39.594411    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.594459    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.594459    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.598092    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:39.599506    8140 pod_ready.go:92] pod "kube-proxy-qg9wf" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:39.599506    8140 pod_ready.go:81] duration metric: took 404.0549ms for pod "kube-proxy-qg9wf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.599506    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.797395    8140 request.go:629] Waited for 197.8885ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:08:39.797787    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:08:39.797787    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.797787    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.797787    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.804623    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:39.985402    8140 request.go:629] Waited for 179.6838ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:39.985484    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:39.985484    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.985484    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.985484    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.992284    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:39.993159    8140 pod_ready.go:92] pod "kube-proxy-xq4tv" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:39.993159    8140 pod_ready.go:81] duration metric: took 393.6523ms for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.993159    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.187181    8140 request.go:629] Waited for 193.8794ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:08:40.187427    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:08:40.187427    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.187544    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.187544    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.193015    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:40.391516    8140 request.go:629] Waited for 197.1153ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:40.391702    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:40.391786    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.391786    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.391786    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.396047    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:40.397723    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:40.397723    8140 pod_ready.go:81] duration metric: took 404.5627ms for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.397783    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.593760    8140 request.go:629] Waited for 195.5703ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:08:40.594063    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:08:40.594063    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.594112    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.594112    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.601940    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:08:40.799065    8140 request.go:629] Waited for 195.9465ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:40.799174    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:40.799249    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.799249    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.799249    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.804646    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:40.806917    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:40.806917    8140 pod_ready.go:81] duration metric: took 409.1338ms for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.806917    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.989453    8140 request.go:629] Waited for 182.201ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m03
	I0520 04:08:40.989609    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m03
	I0520 04:08:40.989733    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.989799    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.989824    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.995958    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:41.191888    8140 request.go:629] Waited for 193.7904ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:41.192176    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:41.192176    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.192176    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.192176    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.203935    8140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 04:08:41.204761    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:41.204815    8140 pod_ready.go:81] duration metric: took 397.8973ms for pod "kube-scheduler-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:41.204815    8140 pod_ready.go:38] duration metric: took 5.2139068s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:08:41.204865    8140 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:08:41.217883    8140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:08:41.252275    8140 api_server.go:72] duration metric: took 18.7589511s to wait for apiserver process to appear ...
	I0520 04:08:41.252275    8140 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:08:41.252275    8140 api_server.go:253] Checking apiserver healthz at https://172.25.246.119:8443/healthz ...
	I0520 04:08:41.263288    8140 api_server.go:279] https://172.25.246.119:8443/healthz returned 200:
	ok
	I0520 04:08:41.263288    8140 round_trippers.go:463] GET https://172.25.246.119:8443/version
	I0520 04:08:41.263288    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.263288    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.263288    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.265654    8140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 04:08:41.265775    8140 api_server.go:141] control plane version: v1.30.1
	I0520 04:08:41.265838    8140 api_server.go:131] duration metric: took 13.5634ms to wait for apiserver health ...
	I0520 04:08:41.265896    8140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 04:08:41.394479    8140 request.go:629] Waited for 128.532ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.394912    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.395014    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.395014    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.395014    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.404880    8140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 04:08:41.415801    8140 system_pods.go:59] 24 kube-system pods found
	I0520 04:08:41.415801    8140 system_pods.go:61] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "etcd-ha-291700-m03" [321ff776-654f-4a7b-9973-5b6a672438b1] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kindnet-vdmtq" [12f186e5-765c-4bfe-aecc-91080f16c74d] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-apiserver-ha-291700-m03" [95739f9c-0bd0-4323-8b37-78d67b268722] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-controller-manager-ha-291700-m03" [087a33f0-bb7c-461c-8e4c-cb4e6198ea7a] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-proxy-qg9wf" [a66bf2e1-d8ed-4adf-b10c-71286a6f6856] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-scheduler-ha-291700-m03" [93c0b454-c40e-46ad-87c9-7afee261f119] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-vip-ha-291700-m03" [10ac78f8-a12a-448b-8a5d-b456ae2c0a75] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:08:41.416413    8140 system_pods.go:74] duration metric: took 150.5167ms to wait for pod list to return data ...
	I0520 04:08:41.416413    8140 default_sa.go:34] waiting for default service account to be created ...
	I0520 04:08:41.595629    8140 request.go:629] Waited for 179.0524ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:08:41.595886    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:08:41.595964    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.595964    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.595964    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.600705    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:41.601432    8140 default_sa.go:45] found service account: "default"
	I0520 04:08:41.601432    8140 default_sa.go:55] duration metric: took 185.0181ms for default service account to be created ...
	I0520 04:08:41.601432    8140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 04:08:41.798156    8140 request.go:629] Waited for 196.5103ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.798262    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.798262    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.798262    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.798262    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.810861    8140 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 04:08:41.820595    8140 system_pods.go:86] 24 kube-system pods found
	I0520 04:08:41.820635    8140 system_pods.go:89] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "etcd-ha-291700-m03" [321ff776-654f-4a7b-9973-5b6a672438b1] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kindnet-vdmtq" [12f186e5-765c-4bfe-aecc-91080f16c74d] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-apiserver-ha-291700-m03" [95739f9c-0bd0-4323-8b37-78d67b268722] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-controller-manager-ha-291700-m03" [087a33f0-bb7c-461c-8e4c-cb4e6198ea7a] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-proxy-qg9wf" [a66bf2e1-d8ed-4adf-b10c-71286a6f6856] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-scheduler-ha-291700-m03" [93c0b454-c40e-46ad-87c9-7afee261f119] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-vip-ha-291700-m03" [10ac78f8-a12a-448b-8a5d-b456ae2c0a75] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:08:41.820635    8140 system_pods.go:126] duration metric: took 219.2029ms to wait for k8s-apps to be running ...
	I0520 04:08:41.820635    8140 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 04:08:41.835562    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:08:41.861752    8140 system_svc.go:56] duration metric: took 41.1171ms WaitForService to wait for kubelet
	I0520 04:08:41.861825    8140 kubeadm.go:576] duration metric: took 19.3684662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:08:41.861869    8140 node_conditions.go:102] verifying NodePressure condition ...
	I0520 04:08:41.985546    8140 request.go:629] Waited for 123.3188ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes
	I0520 04:08:41.985546    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes
	I0520 04:08:41.985546    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.985546    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.985546    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:42.001878    8140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0520 04:08:42.003481    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:08:42.003539    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:08:42.003539    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:08:42.003539    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:08:42.003539    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:08:42.003621    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:08:42.003621    8140 node_conditions.go:105] duration metric: took 141.7526ms to run NodePressure ...
	I0520 04:08:42.003621    8140 start.go:240] waiting for startup goroutines ...
	I0520 04:08:42.003682    8140 start.go:254] writing updated cluster config ...
	I0520 04:08:42.017550    8140 ssh_runner.go:195] Run: rm -f paused
	I0520 04:08:42.173136    8140 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 04:08:42.176901    8140 out.go:177] * Done! kubectl is now configured to use "ha-291700" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 20 11:00:48 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:00:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/00680e87d9b50187e656998e544c9798d36a2844cfd52f55a34aa1ae1d0a9c96/resolv.conf as [nameserver 172.25.240.1]"
	May 20 11:00:48 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:00:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4369c807fc4da7d7e17fdd3439df2f5b2fe53f1125bb5ed0261266126c926e2/resolv.conf as [nameserver 172.25.240.1]"
	May 20 11:00:48 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:00:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/841ca5a27ffe789d535a3490a5e7f551d6465aa749d9ab87fb653d5624eec006/resolv.conf as [nameserver 172.25.240.1]"
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.389763795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.390026394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.390045194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.390691691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565129318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565328117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565348417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565854715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571383494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571506693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571541093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571663893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.185697497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.185830596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.185848596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.186412694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:22 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:09:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25e887ed0ea02f96e2033349269707177648515aee3e13d0ee9f7bd9a5aa2d79/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 11:09:23 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:09:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004132614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004356714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004410814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004697114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a097917d5adbc       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   25e887ed0ea02       busybox-fc5497c4f-mw76w
	3d297fccb427c       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   841ca5a27ffe7       coredns-7db6d8ff4d-gglsg
	09c232a7fe7e5       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   00680e87d9b50       coredns-7db6d8ff4d-4hczp
	5e4ba8270bed1       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   d4369c807fc4d       storage-provisioner
	7534bdef6bb33       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   88196fa9961d3       kindnet-kmktc
	32c1915a2e00e       747097150317f                                                                                         9 minutes ago        Running             kube-proxy                0                   3a957403893c5       kube-proxy-xq4tv
	78ba28a57aa21       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   5abdcfb1f2b5a       kube-vip-ha-291700
	bac466f3cb7a4       a52dc94f0a912                                                                                         10 minutes ago       Running             kube-scheduler            0                   2b4cf80fdf2bb       kube-scheduler-ha-291700
	290a4be470427       25a1387cdab82                                                                                         10 minutes ago       Running             kube-controller-manager   0                   49d1fcba87695       kube-controller-manager-ha-291700
	7f57044b1f70d       91be940803172                                                                                         10 minutes ago       Running             kube-apiserver            0                   cb147cc0e7076       kube-apiserver-ha-291700
	2a187608d3c68       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   fe571feda5f80       etcd-ha-291700
	
	
	==> coredns [09c232a7fe7e] <==
	[INFO] 10.244.2.2:45969 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001005s
	[INFO] 10.244.2.2:42189 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.022200702s
	[INFO] 10.244.2.2:51932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001475s
	[INFO] 10.244.2.2:40364 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001901s
	[INFO] 10.244.2.2:49544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012833401s
	[INFO] 10.244.0.4:54139 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001521s
	[INFO] 10.244.0.4:38161 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000622s
	[INFO] 10.244.0.4:49859 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000877s
	[INFO] 10.244.0.4:53896 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000081s
	[INFO] 10.244.0.4:42252 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001581s
	[INFO] 10.244.0.4:45872 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0004665s
	[INFO] 10.244.1.2:49750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001148s
	[INFO] 10.244.1.2:44851 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000849s
	[INFO] 10.244.1.2:57033 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001421s
	[INFO] 10.244.2.2:52593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002442s
	[INFO] 10.244.2.2:43583 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081s
	[INFO] 10.244.0.4:47883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001217s
	[INFO] 10.244.0.4:37129 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176201s
	[INFO] 10.244.1.2:36237 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000772s
	[INFO] 10.244.2.2:52455 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002387s
	[INFO] 10.244.2.2:57533 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000934s
	[INFO] 10.244.0.4:33879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187s
	[INFO] 10.244.0.4:49457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174801s
	[INFO] 10.244.1.2:60139 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002037s
	[INFO] 10.244.1.2:43968 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001128s
	
	
	==> coredns [3d297fccb427] <==
	[INFO] 10.244.1.2:35967 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000099s
	[INFO] 10.244.1.2:55663 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.024044752s
	[INFO] 10.244.2.2:50767 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001264s
	[INFO] 10.244.2.2:46797 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.2.2:52977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002519s
	[INFO] 10.244.0.4:38084 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002047s
	[INFO] 10.244.0.4:34960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000987s
	[INFO] 10.244.1.2:40319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.080228512s
	[INFO] 10.244.1.2:57732 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003277s
	[INFO] 10.244.1.2:33154 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002549s
	[INFO] 10.244.1.2:42569 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012372802s
	[INFO] 10.244.1.2:55813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000618s
	[INFO] 10.244.2.2:51764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002585s
	[INFO] 10.244.2.2:34629 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000609s
	[INFO] 10.244.0.4:57039 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001498s
	[INFO] 10.244.0.4:52530 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001986s
	[INFO] 10.244.1.2:50976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001901s
	[INFO] 10.244.1.2:60696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000619s
	[INFO] 10.244.1.2:59375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001456s
	[INFO] 10.244.2.2:57839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001713s
	[INFO] 10.244.2.2:48189 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001798s
	[INFO] 10.244.0.4:32914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002553s
	[INFO] 10.244.0.4:50478 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000595s
	[INFO] 10.244.1.2:48046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003657s
	[INFO] 10.244.1.2:53608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001944s
	
	
	==> describe nodes <==
	Name:               ha-291700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T04_00_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:00:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:10:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:09:54 +0000   Mon, 20 May 2024 11:00:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:09:54 +0000   Mon, 20 May 2024 11:00:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:09:54 +0000   Mon, 20 May 2024 11:00:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:09:54 +0000   Mon, 20 May 2024 11:00:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.246.119
	  Hostname:    ha-291700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba3a75213dec414eb3ca40f5e8b787a6
	  System UUID:                1bf698ac-7375-c44d-af40-b09309c0ada8
	  Boot ID:                    9daea59b-2ac2-44db-b81f-2140148dd0a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mw76w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-7db6d8ff4d-4hczp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m55s
	  kube-system                 coredns-7db6d8ff4d-gglsg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m55s
	  kube-system                 etcd-ha-291700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-kmktc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-apiserver-ha-291700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-291700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-xq4tv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-ha-291700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-291700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m53s  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-291700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-291700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node ha-291700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node ha-291700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m55s  node-controller  Node ha-291700 event: Registered Node ha-291700 in Controller
	  Normal  NodeReady                9m43s  kubelet          Node ha-291700 status is now: NodeReady
	  Normal  RegisteredNode           5m52s  node-controller  Node ha-291700 event: Registered Node ha-291700 in Controller
	  Normal  RegisteredNode           114s   node-controller  Node ha-291700 event: Registered Node ha-291700 in Controller
	
	
	Name:               ha-291700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T04_04_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:04:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:10:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:09:53 +0000   Mon, 20 May 2024 11:04:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:09:53 +0000   Mon, 20 May 2024 11:04:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:09:53 +0000   Mon, 20 May 2024 11:04:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:09:53 +0000   Mon, 20 May 2024 11:04:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.251.208
	  Hostname:    ha-291700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3554ae6a627e456685a6463794338840
	  System UUID:                a11b3769-33c5-2a4a-83c1-fcb6337901f4
	  Boot ID:                    586f963f-f3bf-4b1e-987d-f03d167c3bd0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qxg28                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-291700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-2sqwt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-291700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-291700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-94csf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-291700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-291700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet          Node ha-291700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s (x8 over 6m13s)  kubelet          Node ha-291700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet          Node ha-291700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-291700-m02 event: Registered Node ha-291700-m02 in Controller
	  Normal  RegisteredNode           5m52s                  node-controller  Node ha-291700-m02 event: Registered Node ha-291700-m02 in Controller
	  Normal  RegisteredNode           114s                   node-controller  Node ha-291700-m02 event: Registered Node ha-291700-m02 in Controller
	
	
	Name:               ha-291700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T04_08_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:08:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:10:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:09:46 +0000   Mon, 20 May 2024 11:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:09:46 +0000   Mon, 20 May 2024 11:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:09:46 +0000   Mon, 20 May 2024 11:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:09:46 +0000   Mon, 20 May 2024 11:08:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.246.110
	  Hostname:    ha-291700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6effe892fc604c7ca670493903395dda
	  System UUID:                1afc6042-4034-c244-b427-bbf53c43dbc9
	  Boot ID:                    56458d60-9ccb-4191-b42c-5cbabce2dfac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bghlc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-291700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m12s
	  kube-system                 kindnet-vdmtq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m16s
	  kube-system                 kube-apiserver-ha-291700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-ha-291700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-qg9wf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-ha-291700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-vip-ha-291700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node ha-291700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node ha-291700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node ha-291700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-291700-m03 event: Registered Node ha-291700-m03 in Controller
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-291700-m03 event: Registered Node ha-291700-m03 in Controller
	  Normal  RegisteredNode           114s                   node-controller  Node ha-291700-m03 event: Registered Node ha-291700-m03 in Controller
	
	
	==> dmesg <==
	[  +6.878927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000071] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 10:59] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.182683] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[ +32.195135] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.115863] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.573509] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.204047] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.242377] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.799310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.204162] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.226901] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.322093] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[May20 11:00] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.105267] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.530810] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +5.508373] systemd-fstab-generator[1712]: Ignoring "noauto" option for root device
	[  +0.107158] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.027945] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.591477] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[ +13.802267] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.799365] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.126041] kauditd_printk_skb: 19 callbacks suppressed
	[May20 11:04] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2a187608d3c6] <==
	{"level":"warn","ts":"2024-05-20T11:08:15.732382Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://172.25.246.110:2380/version","remote-member-id":"d0ba6f6216eaddae","error":"Get \"https://172.25.246.110:2380/version\": dial tcp 172.25.246.110:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T11:08:15.732598Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d0ba6f6216eaddae","error":"Get \"https://172.25.246.110:2380/version\": dial tcp 172.25.246.110:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T11:08:16.683373Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"d0ba6f6216eaddae","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-05-20T11:08:17.735656Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"d0ba6f6216eaddae","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-05-20T11:08:18.072881Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d0ba6f6216eaddae"}
	{"level":"info","ts":"2024-05-20T11:08:18.083181Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1f16a871a6f5df87","remote-peer-id":"d0ba6f6216eaddae"}
	{"level":"info","ts":"2024-05-20T11:08:18.083281Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1f16a871a6f5df87","remote-peer-id":"d0ba6f6216eaddae"}
	{"level":"info","ts":"2024-05-20T11:08:18.181168Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1f16a871a6f5df87","to":"d0ba6f6216eaddae","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T11:08:18.181291Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1f16a871a6f5df87","remote-peer-id":"d0ba6f6216eaddae"}
	{"level":"info","ts":"2024-05-20T11:08:18.23884Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1f16a871a6f5df87","to":"d0ba6f6216eaddae","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T11:08:18.238897Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1f16a871a6f5df87","remote-peer-id":"d0ba6f6216eaddae"}
	{"level":"warn","ts":"2024-05-20T11:08:18.68483Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"d0ba6f6216eaddae","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-05-20T11:08:19.683518Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"d0ba6f6216eaddae","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-05-20T11:08:21.192549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f16a871a6f5df87 switched to configuration voters=(2240163070749302663 7997178419587767282 15040456372639161774)"}
	{"level":"info","ts":"2024-05-20T11:08:21.192955Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"470272b7ead657db","local-member-id":"1f16a871a6f5df87"}
	{"level":"info","ts":"2024-05-20T11:08:21.193154Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"1f16a871a6f5df87","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"d0ba6f6216eaddae"}
	{"level":"warn","ts":"2024-05-20T11:08:29.152559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.19324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:436"}
	{"level":"info","ts":"2024-05-20T11:08:29.152668Z","caller":"traceutil/trace.go:171","msg":"trace[2048412590] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1549; }","duration":"163.347838ms","start":"2024-05-20T11:08:28.989306Z","end":"2024-05-20T11:08:29.152654Z","steps":["trace[2048412590] 'range keys from in-memory index tree'  (duration: 161.706961ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:08:29.153627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.191312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-291700-m03\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-05-20T11:08:29.153727Z","caller":"traceutil/trace.go:171","msg":"trace[276489923] range","detail":"{range_begin:/registry/minions/ha-291700-m03; range_end:; response_count:1; response_revision:1549; }","duration":"165.34341ms","start":"2024-05-20T11:08:28.988371Z","end":"2024-05-20T11:08:29.153714Z","steps":["trace[276489923] 'range keys from in-memory index tree'  (duration: 163.056342ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:08:29.405633Z","caller":"traceutil/trace.go:171","msg":"trace[838122375] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"113.058636ms","start":"2024-05-20T11:08:29.292558Z","end":"2024-05-20T11:08:29.405617Z","steps":["trace[838122375] 'process raft request'  (duration: 113.007437ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:08:29.405869Z","caller":"traceutil/trace.go:171","msg":"trace[1310175402] transaction","detail":"{read_only:false; response_revision:1550; number_of_response:1; }","duration":"245.09571ms","start":"2024-05-20T11:08:29.16076Z","end":"2024-05-20T11:08:29.405855Z","steps":["trace[1310175402] 'process raft request'  (duration: 244.637816ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.781434Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1035}
	{"level":"info","ts":"2024-05-20T11:10:15.877882Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1035,"took":"95.442719ms","hash":3224319738,"current-db-size-bytes":3592192,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2109440,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T11:10:15.878057Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3224319738,"revision":1035,"compact-revision":-1}
	
	
	==> kernel <==
	 11:10:31 up 12 min,  0 users,  load average: 0.54, 0.49, 0.33
	Linux ha-291700 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7534bdef6bb3] <==
	I0520 11:09:45.384242       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:09:55.410914       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:09:55.411062       1 main.go:227] handling current node
	I0520 11:09:55.411077       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:09:55.411085       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:09:55.411365       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:09:55.411394       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:10:05.429534       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:10:05.429633       1 main.go:227] handling current node
	I0520 11:10:05.429648       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:10:05.429658       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:10:05.430144       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:10:05.430161       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:10:15.442833       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:10:15.442952       1 main.go:227] handling current node
	I0520 11:10:15.442973       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:10:15.443788       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:10:15.444279       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:10:15.444369       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:10:25.453802       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:10:25.453852       1 main.go:227] handling current node
	I0520 11:10:25.453866       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:10:25.454061       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:10:25.454510       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:10:25.454565       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7f57044b1f70] <==
	Trace[341724212]: ---"About to write a response" 582ms (11:07:53.760)
	Trace[341724212]: [582.630949ms] [582.630949ms] END
	I0520 11:07:53.861751       1 trace.go:236] Trace[1922809812]: "List" accept:application/json, */*,audit-id:373dbc50-292e-449a-984c-ff3ba5a1d8f9,client:172.25.251.208,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:cluster,url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (20-May-2024 11:07:53.098) (total time: 762ms):
	Trace[1922809812]: ["List(recursive=true) etcd3" audit-id:373dbc50-292e-449a-984c-ff3ba5a1d8f9,key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: 762ms (11:07:53.098)]
	Trace[1922809812]: [762.959254ms] [762.959254ms] END
	E0520 11:08:15.328456       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 11:08:15.329132       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 11:08:15.329106       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 12.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 11:08:15.330495       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 11:08:15.330774       1 timeout.go:142] post-timeout activity - time-elapsed: 2.386258ms, PATCH "/api/v1/namespaces/default/events/ha-291700-m03.17d12dcdaf39de48" result: <nil>
	E0520 11:09:31.182253       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61759: use of closed network connection
	E0520 11:09:31.649142       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61762: use of closed network connection
	E0520 11:09:33.131221       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61764: use of closed network connection
	E0520 11:09:33.649756       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61766: use of closed network connection
	E0520 11:09:34.120840       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61768: use of closed network connection
	E0520 11:09:34.700197       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61770: use of closed network connection
	E0520 11:09:35.160346       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61772: use of closed network connection
	E0520 11:09:35.652084       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61774: use of closed network connection
	E0520 11:09:36.113270       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61776: use of closed network connection
	E0520 11:09:36.907315       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61779: use of closed network connection
	E0520 11:09:47.372276       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61781: use of closed network connection
	E0520 11:09:47.848922       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61784: use of closed network connection
	E0520 11:09:58.309706       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61786: use of closed network connection
	E0520 11:09:58.764663       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61788: use of closed network connection
	E0520 11:10:09.222195       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61790: use of closed network connection
	
	
	==> kube-controller-manager [290a4be47042] <==
	I0520 11:08:15.434610       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-291700-m03"
	I0520 11:09:21.420714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="176.081969ms"
	I0520 11:09:21.518592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.797694ms"
	I0520 11:09:21.677936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.148339ms"
	I0520 11:09:22.006615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="328.569636ms"
	E0520 11:09:22.007126       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0520 11:09:22.046669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.298142ms"
	I0520 11:09:22.124606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.62149ms"
	I0520 11:09:22.125406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="671.297µs"
	I0520 11:09:22.305625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.399µs"
	I0520 11:09:23.426054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.4µs"
	I0520 11:09:23.618045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.7µs"
	I0520 11:09:23.696193       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.8µs"
	I0520 11:09:23.709482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.2µs"
	I0520 11:09:23.739765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.5µs"
	I0520 11:09:23.764566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125µs"
	I0520 11:09:23.785591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.1µs"
	I0520 11:09:24.599066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.427265ms"
	I0520 11:09:24.599528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="271.2µs"
	I0520 11:09:24.718090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.468793ms"
	I0520 11:09:24.718718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.3µs"
	I0520 11:09:25.641082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.5µs"
	I0520 11:09:25.674477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.3µs"
	I0520 11:09:28.540601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.278793ms"
	I0520 11:09:28.540695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.2µs"
	
	
	==> kube-proxy [32c1915a2e00] <==
	I0520 11:00:37.083671       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:00:37.098636       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.246.119"]
	I0520 11:00:37.201278       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:00:37.201324       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:00:37.201343       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:00:37.205129       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:00:37.205524       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:00:37.205861       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:00:37.207223       1 config.go:192] "Starting service config controller"
	I0520 11:00:37.207399       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:00:37.207583       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:00:37.207793       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:00:37.208531       1 config.go:319] "Starting node config controller"
	I0520 11:00:37.209754       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:00:37.307837       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:00:37.309194       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:00:37.310221       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bac466f3cb7a] <==
	W0520 11:00:20.088940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:00:20.089020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:00:20.114740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:00:20.114823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 11:00:20.266313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:00:20.266384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:00:20.272933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:00:20.273385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:00:20.294610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:00:20.294697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0520 11:00:21.775200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:09:21.375594       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="0fcf49f9-053c-4345-9b79-044a9cf79f4c" pod="default/busybox-fc5497c4f-qxg28" assumedNode="ha-291700-m02" currentNode="ha-291700"
	I0520 11:09:21.387414       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="25cad6b2-46c1-4591-8bc4-c096c9866cfe" pod="default/busybox-fc5497c4f-sj7kv" assumedNode="ha-291700-m03" currentNode="ha-291700-m02"
	E0520 11:09:21.419242       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qxg28\": pod busybox-fc5497c4f-qxg28 is already assigned to node \"ha-291700-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qxg28" node="ha-291700"
	E0520 11:09:21.419504       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0fcf49f9-053c-4345-9b79-044a9cf79f4c(default/busybox-fc5497c4f-qxg28) was assumed on ha-291700 but assigned to ha-291700-m02" pod="default/busybox-fc5497c4f-qxg28"
	E0520 11:09:21.419551       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qxg28\": pod busybox-fc5497c4f-qxg28 is already assigned to node \"ha-291700-m02\"" pod="default/busybox-fc5497c4f-qxg28"
	I0520 11:09:21.419619       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qxg28" node="ha-291700-m02"
	E0520 11:09:21.423531       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sj7kv\": pod busybox-fc5497c4f-sj7kv is already assigned to node \"ha-291700-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-sj7kv" node="ha-291700-m02"
	E0520 11:09:21.423592       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 25cad6b2-46c1-4591-8bc4-c096c9866cfe(default/busybox-fc5497c4f-sj7kv) was assumed on ha-291700-m02 but assigned to ha-291700-m03" pod="default/busybox-fc5497c4f-sj7kv"
	E0520 11:09:21.423610       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sj7kv\": pod busybox-fc5497c4f-sj7kv is already assigned to node \"ha-291700-m03\"" pod="default/busybox-fc5497c4f-sj7kv"
	I0520 11:09:21.423632       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-sj7kv" node="ha-291700-m03"
	E0520 11:09:21.569128       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mw76w\": pod busybox-fc5497c4f-mw76w is already assigned to node \"ha-291700\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-mw76w" node="ha-291700"
	E0520 11:09:21.571103       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0710240d-711a-43a0-bbee-82236e00bbef(default/busybox-fc5497c4f-mw76w) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-mw76w"
	E0520 11:09:21.571309       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mw76w\": pod busybox-fc5497c4f-mw76w is already assigned to node \"ha-291700\"" pod="default/busybox-fc5497c4f-mw76w"
	I0520 11:09:21.571752       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-mw76w" node="ha-291700"
	
	
	==> kubelet <==
	May 20 11:06:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:07:23 ha-291700 kubelet[2213]: E0520 11:07:23.369645    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:07:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:07:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:07:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:07:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:08:23 ha-291700 kubelet[2213]: E0520 11:08:23.366507    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:08:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:08:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:08:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:08:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:09:21 ha-291700 kubelet[2213]: I0520 11:09:21.555658    2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=517.555628287 podStartE2EDuration="8m37.555628287s" podCreationTimestamp="2024-05-20 11:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:00:49.897686801 +0000 UTC m=+26.785477024" watchObservedRunningTime="2024-05-20 11:09:21.555628287 +0000 UTC m=+538.443418510"
	May 20 11:09:21 ha-291700 kubelet[2213]: I0520 11:09:21.556376    2213 topology_manager.go:215] "Topology Admit Handler" podUID="0710240d-711a-43a0-bbee-82236e00bbef" podNamespace="default" podName="busybox-fc5497c4f-mw76w"
	May 20 11:09:21 ha-291700 kubelet[2213]: I0520 11:09:21.679975    2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjmm\" (UniqueName: \"kubernetes.io/projected/0710240d-711a-43a0-bbee-82236e00bbef-kube-api-access-lwjmm\") pod \"busybox-fc5497c4f-mw76w\" (UID: \"0710240d-711a-43a0-bbee-82236e00bbef\") " pod="default/busybox-fc5497c4f-mw76w"
	May 20 11:09:22 ha-291700 kubelet[2213]: I0520 11:09:22.407306    2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e887ed0ea02f96e2033349269707177648515aee3e13d0ee9f7bd9a5aa2d79"
	May 20 11:09:23 ha-291700 kubelet[2213]: E0520 11:09:23.390087    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:09:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:09:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:09:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:09:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:10:23 ha-291700 kubelet[2213]: E0520 11:10:23.375637    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:10:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:10:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:10:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:10:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 04:10:22.261551    5768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-291700 -n ha-291700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-291700 -n ha-291700: (13.0050674s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-291700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (70.13s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (61.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-291700 node stop m02 -v=7 --alsologtostderr: exit status 1 (18.7038594s)

                                                
                                                
-- stdout --
	* Stopping node "ha-291700-m02"  ...
	* Powering off "ha-291700-m02" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 04:26:52.985731    4444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 04:26:52.993253    4444 out.go:291] Setting OutFile to fd 1952 ...
	I0520 04:26:53.009915    4444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:53.009915    4444 out.go:304] Setting ErrFile to fd 1944...
	I0520 04:26:53.009915    4444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:53.026659    4444 mustload.go:65] Loading cluster: ha-291700
	I0520 04:26:53.028065    4444 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:26:53.028312    4444 stop.go:39] StopHost: ha-291700-m02
	I0520 04:26:53.034533    4444 out.go:177] * Stopping node "ha-291700-m02"  ...
	I0520 04:26:53.037552    4444 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 04:26:53.056995    4444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 04:26:53.057593    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:26:55.363798    4444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:26:55.363798    4444 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:26:55.364033    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:26:58.103692    4444 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:26:58.103692    4444 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:26:58.104581    4444 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:26:58.226474    4444 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.1694692s)
	I0520 04:26:58.241245    4444 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 04:26:58.332972    4444 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 04:26:58.381590    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:27:00.673519    4444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:27:00.673519    4444 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:27:00.676664    4444 out.go:177] * Powering off "ha-291700-m02" via SSH ...
	I0520 04:27:00.679141    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:27:03.003135    4444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:27:03.003135    4444 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:27:03.003135    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:27:05.690634    4444 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:27:05.690716    4444 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:27:05.696362    4444 main.go:141] libmachine: Using SSH client type: native
	I0520 04:27:05.697008    4444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:27:05.697008    4444 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0520 04:27:05.865370    4444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:27:05.865502    4444 stop.go:100] poweroff result: out=, err=<nil>
	I0520 04:27:05.865502    4444 main.go:141] libmachine: Stopping "ha-291700-m02"...
	I0520 04:27:05.865502    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:27:08.929333    4444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:27:08.929333    4444 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:27:08.930159    4444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM ha-291700-m02

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-windows-amd64.exe -p ha-291700 node stop m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr: context deadline exceeded (161.9µs)
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-291700 -n ha-291700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-291700 -n ha-291700: (13.0267569s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 logs -n 25: (15.1835519s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:22 PDT | 20 May 24 04:22 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:22 PDT | 20 May 24 04:22 PDT |
	|         | ha-291700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:22 PDT | 20 May 24 04:22 PDT |
	|         | ha-291700:/home/docker/cp-test_ha-291700-m03_ha-291700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:22 PDT | 20 May 24 04:22 PDT |
	|         | ha-291700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n ha-291700 sudo cat                                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:22 PDT | 20 May 24 04:23 PDT |
	|         | /home/docker/cp-test_ha-291700-m03_ha-291700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:23 PDT | 20 May 24 04:23 PDT |
	|         | ha-291700-m02:/home/docker/cp-test_ha-291700-m03_ha-291700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:23 PDT | 20 May 24 04:23 PDT |
	|         | ha-291700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n ha-291700-m02 sudo cat                                                                                   | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:23 PDT | 20 May 24 04:23 PDT |
	|         | /home/docker/cp-test_ha-291700-m03_ha-291700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:23 PDT | 20 May 24 04:23 PDT |
	|         | ha-291700-m04:/home/docker/cp-test_ha-291700-m03_ha-291700-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:23 PDT | 20 May 24 04:24 PDT |
	|         | ha-291700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n ha-291700-m04 sudo cat                                                                                   | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:24 PDT | 20 May 24 04:24 PDT |
	|         | /home/docker/cp-test_ha-291700-m03_ha-291700-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-291700 cp testdata\cp-test.txt                                                                                         | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:24 PDT | 20 May 24 04:24 PDT |
	|         | ha-291700-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:24 PDT | 20 May 24 04:24 PDT |
	|         | ha-291700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:24 PDT | 20 May 24 04:24 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:24 PDT | 20 May 24 04:24 PDT |
	|         | ha-291700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:24 PDT | 20 May 24 04:25 PDT |
	|         | ha-291700:/home/docker/cp-test_ha-291700-m04_ha-291700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:25 PDT | 20 May 24 04:25 PDT |
	|         | ha-291700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n ha-291700 sudo cat                                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:25 PDT | 20 May 24 04:25 PDT |
	|         | /home/docker/cp-test_ha-291700-m04_ha-291700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:25 PDT | 20 May 24 04:25 PDT |
	|         | ha-291700-m02:/home/docker/cp-test_ha-291700-m04_ha-291700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:25 PDT | 20 May 24 04:26 PDT |
	|         | ha-291700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n ha-291700-m02 sudo cat                                                                                   | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:26 PDT | 20 May 24 04:26 PDT |
	|         | /home/docker/cp-test_ha-291700-m04_ha-291700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt                                                                       | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:26 PDT | 20 May 24 04:26 PDT |
	|         | ha-291700-m03:/home/docker/cp-test_ha-291700-m04_ha-291700-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n                                                                                                          | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:26 PDT | 20 May 24 04:26 PDT |
	|         | ha-291700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-291700 ssh -n ha-291700-m03 sudo cat                                                                                   | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:26 PDT | 20 May 24 04:26 PDT |
	|         | /home/docker/cp-test_ha-291700-m04_ha-291700-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-291700 node stop m02 -v=7                                                                                              | ha-291700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:26 PDT |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:57:11
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:57:11.677213    8140 out.go:291] Setting OutFile to fd 1060 ...
	I0520 03:57:11.677839    8140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:57:11.677839    8140 out.go:304] Setting ErrFile to fd 1372...
	I0520 03:57:11.677839    8140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:57:11.705674    8140 out.go:298] Setting JSON to false
	I0520 03:57:11.709708    8140 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2628,"bootTime":1716200003,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:57:11.709708    8140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:57:11.713513    8140 out.go:177] * [ha-291700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:57:11.719705    8140 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:57:11.719705    8140 notify.go:220] Checking for updates...
	I0520 03:57:11.724317    8140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:57:11.727701    8140 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:57:11.730419    8140 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:57:11.734212    8140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:57:11.737254    8140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:57:17.221655    8140 out.go:177] * Using the hyperv driver based on user configuration
	I0520 03:57:17.224692    8140 start.go:297] selected driver: hyperv
	I0520 03:57:17.224692    8140 start.go:901] validating driver "hyperv" against <nil>
	I0520 03:57:17.224692    8140 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:57:17.272938    8140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:57:17.273804    8140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:57:17.274388    8140 cni.go:84] Creating CNI manager for ""
	I0520 03:57:17.274388    8140 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 03:57:17.274388    8140 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 03:57:17.274591    8140 start.go:340] cluster config:
	{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:57:17.274591    8140 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:57:17.278673    8140 out.go:177] * Starting "ha-291700" primary control-plane node in "ha-291700" cluster
	I0520 03:57:17.280379    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:57:17.281341    8140 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 03:57:17.281341    8140 cache.go:56] Caching tarball of preloaded images
	I0520 03:57:17.281573    8140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 03:57:17.281878    8140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:57:17.282058    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 03:57:17.282642    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json: {Name:mk4e8fabedba09636c589d5d4a21388cc33f4a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:57:17.283666    8140 start.go:360] acquireMachinesLock for ha-291700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:57:17.283666    8140 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-291700"
	I0520 03:57:17.283666    8140 start.go:93] Provisioning new machine with config: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:57:17.284319    8140 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 03:57:17.288063    8140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:57:17.288423    8140 start.go:159] libmachine.API.Create for "ha-291700" (driver="hyperv")
	I0520 03:57:17.288545    8140 client.go:168] LocalClient.Create starting
	I0520 03:57:17.289279    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 03:57:17.289559    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 03:57:17.289559    8140 main.go:141] libmachine: Parsing certificate...
	I0520 03:57:17.290028    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 03:57:17.290262    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 03:57:17.290293    8140 main.go:141] libmachine: Parsing certificate...
	I0520 03:57:17.290419    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 03:57:19.388587    8140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 03:57:19.388674    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:19.388674    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 03:57:21.177001    8140 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 03:57:21.177053    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:21.177053    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 03:57:22.743473    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 03:57:22.744562    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:22.744562    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 03:57:26.400485    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 03:57:26.400936    8140 main.go:141] libmachine: [stderr =====>] : 
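The driver's switch query above returns a JSON array of `Id`/`Name`/`SwitchType` objects, from which it later settles on "Default Switch". A minimal Go sketch of that selection step, using the JSON shape from the log (the `vmSwitch` type and `pickSwitch` helper are illustrative names, not minikube's actual API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query in the log.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch parses the JSON emitted by the PowerShell query and returns the
// first candidate switch, matching the "Using switch" decision in the log.
func pickSwitch(raw []byte) (vmSwitch, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return vmSwitch{}, err
	}
	if len(switches) == 0 {
		return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0], nil
}

func main() {
	// JSON copied from the stdout block above.
	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	s, err := pickSwitch(raw)
	fmt.Println(s.Name, err)
}
```

The query itself sorts by `SwitchType` and filters to external switches or the well-known Default Switch GUID, so taking the first element prefers an external switch when one exists.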
	I0520 03:57:26.403588    8140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 03:57:26.875493    8140 main.go:141] libmachine: Creating SSH key...
	I0520 03:57:26.983659    8140 main.go:141] libmachine: Creating VM...
	I0520 03:57:26.983735    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 03:57:29.860829    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 03:57:29.860829    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:29.861271    8140 main.go:141] libmachine: Using switch "Default Switch"
	I0520 03:57:29.861369    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 03:57:31.653955    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 03:57:31.654198    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:31.654286    8140 main.go:141] libmachine: Creating VHD
	I0520 03:57:31.654410    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 03:57:35.460910    8140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 19701806-2E15-4246-8309-72CFDE92B7AC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 03:57:35.461007    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:35.461007    8140 main.go:141] libmachine: Writing magic tar header
	I0520 03:57:35.461096    8140 main.go:141] libmachine: Writing SSH key tar header
	I0520 03:57:35.469078    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 03:57:38.671907    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:38.671907    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:38.671907    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\disk.vhd' -SizeBytes 20000MB
	I0520 03:57:41.249545    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:41.249545    8140 main.go:141] libmachine: [stderr =====>] : 
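The disk provisioning above is a three-step sequence: create a small fixed VHD (fixed so the tar header and SSH key can be written at raw offsets), convert it to a dynamic disk while deleting the source, then grow it to the configured 20000MB. A sketch reconstructing that command sequence; `buildVHDCommands` and the paths are hypothetical, only the cmdlets and flags come from the log:

```go
package main

import "fmt"

// buildVHDCommands reconstructs the three Hyper-V disk steps from the log:
// New-VHD (fixed) -> Convert-VHD (dynamic, delete source) -> Resize-VHD.
func buildVHDCommands(machineDir string, sizeMB int) []string {
	fixed := machineDir + `\fixed.vhd`
	disk := machineDir + `\disk.vhd`
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
}

func main() {
	for _, c := range buildVHDCommands(`C:\minikube\machines\ha-291700`, 20000) {
		fmt.Println(c)
	}
}
```

Each string would be handed to `powershell.exe -NoProfile -NonInteractive`, as the `[executing ==>]` lines show.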
	I0520 03:57:41.249545    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-291700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 03:57:44.953673    8140 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-291700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 03:57:44.953673    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:44.954648    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-291700 -DynamicMemoryEnabled $false
	I0520 03:57:47.257963    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:47.259006    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:47.259006    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-291700 -Count 2
	I0520 03:57:49.464266    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:49.464266    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:49.465432    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-291700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\boot2docker.iso'
	I0520 03:57:52.063199    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:52.063484    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:52.063568    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-291700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\disk.vhd'
	I0520 03:57:54.772593    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:54.772593    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:54.772774    8140 main.go:141] libmachine: Starting VM...
	I0520 03:57:54.772774    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-291700
	I0520 03:57:57.880530    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:57:57.880530    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:57:57.880718    8140 main.go:141] libmachine: Waiting for host to start...
	I0520 03:57:57.880718    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:00.269802    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:00.269870    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:00.269870    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:02.916160    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:02.916160    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:03.922805    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:06.234981    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:06.235601    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:06.235681    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:08.944169    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:08.944169    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:09.947903    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:12.311823    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:12.312580    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:12.312642    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:14.963966    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:14.963966    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:15.970824    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:18.255575    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:18.256185    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:18.256185    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:20.909225    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 03:58:20.909225    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:21.914808    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:24.240165    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:24.240570    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:24.240682    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:26.888133    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:26.888133    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:26.888329    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:29.109528    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:29.109528    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:29.109528    8140 machine.go:94] provisionDockerMachine start ...
	I0520 03:58:29.110115    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:31.362984    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:31.362984    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:31.363763    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:34.045026    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:34.045839    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:34.051489    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:58:34.061724    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:58:34.061724    8140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:58:34.194737    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 03:58:34.194838    8140 buildroot.go:166] provisioning hostname "ha-291700"
	I0520 03:58:34.194989    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:36.406785    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:36.407870    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:36.407918    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:39.052577    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:39.053314    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:39.059484    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:58:39.060118    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:58:39.060118    8140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-291700 && echo "ha-291700" | sudo tee /etc/hostname
	I0520 03:58:39.230994    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-291700
	
	I0520 03:58:39.231594    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:41.434020    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:41.434020    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:41.434454    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:44.137380    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:44.137996    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:44.143306    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:58:44.143520    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:58:44.143520    8140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-291700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-291700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-291700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:58:44.300320    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:58:44.300320    8140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 03:58:44.300320    8140 buildroot.go:174] setting up certificates
	I0520 03:58:44.300320    8140 provision.go:84] configureAuth start
	I0520 03:58:44.301352    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:46.542389    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:46.542389    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:46.543329    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:49.217635    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:49.217635    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:49.217635    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:51.489627    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:51.490636    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:51.490694    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:54.137675    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:54.137675    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:54.137754    8140 provision.go:143] copyHostCerts
	I0520 03:58:54.137829    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 03:58:54.138281    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 03:58:54.138374    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 03:58:54.138766    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 03:58:54.140130    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 03:58:54.140383    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 03:58:54.140479    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 03:58:54.140926    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 03:58:54.142035    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 03:58:54.142358    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 03:58:54.142358    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 03:58:54.143030    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 03:58:54.144098    8140 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-291700 san=[127.0.0.1 172.25.246.119 ha-291700 localhost minikube]
	I0520 03:58:54.308063    8140 provision.go:177] copyRemoteCerts
	I0520 03:58:54.322456    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:58:54.322456    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:58:56.570920    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:58:56.571457    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:56.571457    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:58:59.199980    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:58:59.199980    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:58:59.201119    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:58:59.312533    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9900239s)
	I0520 03:58:59.312607    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 03:58:59.313068    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:58:59.359134    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 03:58:59.359667    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 03:58:59.407395    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 03:58:59.408103    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 03:58:59.454516    8140 provision.go:87] duration metric: took 15.1540601s to configureAuth
	I0520 03:58:59.454589    8140 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:58:59.455128    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:58:59.455188    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:01.726893    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:01.727004    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:01.727004    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:04.346376    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:04.346437    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:04.352089    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:04.352803    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:04.352803    8140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:59:04.498788    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:59:04.498788    8140 buildroot.go:70] root file system type: tmpfs
	I0520 03:59:04.499424    8140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:59:04.499522    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:06.713973    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:06.713973    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:06.713973    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:09.398058    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:09.398058    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:09.406499    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:09.406499    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:09.406499    8140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:59:09.561856    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:59:09.562009    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:11.776325    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:11.776658    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:11.776774    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:14.423430    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:14.424462    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:14.432173    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:14.432951    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:14.432951    8140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:59:16.600656    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 03:59:16.600656    8140 machine.go:97] duration metric: took 47.4910616s to provisionDockerMachine
	I0520 03:59:16.600656    8140 client.go:171] duration metric: took 1m59.3119451s to LocalClient.Create
	I0520 03:59:16.600656    8140 start.go:167] duration metric: took 1m59.312067s to libmachine.API.Create "ha-291700"
	I0520 03:59:16.600656    8140 start.go:293] postStartSetup for "ha-291700" (driver="hyperv")
	I0520 03:59:16.600656    8140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:59:16.614805    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:59:16.614805    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:18.812218    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:18.812218    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:18.813029    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:21.427905    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:21.427932    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:21.428102    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:59:21.531803    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9169304s)
	I0520 03:59:21.545657    8140 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:59:21.551658    8140 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 03:59:21.551658    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 03:59:21.551658    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 03:59:21.552652    8140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 03:59:21.552652    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 03:59:21.566784    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 03:59:21.585677    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 03:59:21.632018    8140 start.go:296] duration metric: took 5.0313549s for postStartSetup
	I0520 03:59:21.635809    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:23.831384    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:23.831384    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:23.831482    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:26.386891    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:26.386891    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:26.387346    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 03:59:26.390814    8140 start.go:128] duration metric: took 2m9.1063147s to createHost
	I0520 03:59:26.390889    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:28.587881    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:28.588770    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:28.588770    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:31.237127    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:31.237127    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:31.244708    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:31.245231    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:31.245231    8140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 03:59:31.376807    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202771.368877286
	
	I0520 03:59:31.376807    8140 fix.go:216] guest clock: 1716202771.368877286
	I0520 03:59:31.376807    8140 fix.go:229] Guest: 2024-05-20 03:59:31.368877286 -0700 PDT Remote: 2024-05-20 03:59:26.3908896 -0700 PDT m=+134.806739301 (delta=4.977987686s)
	I0520 03:59:31.376955    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:33.573407    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:33.573656    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:33.573719    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:36.190733    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:36.191756    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:36.197917    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 03:59:36.198076    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.119 22 <nil> <nil>}
	I0520 03:59:36.198076    8140 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716202771
	I0520 03:59:36.348790    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 10:59:31 UTC 2024
	
	I0520 03:59:36.348840    8140 fix.go:236] clock set: Mon May 20 10:59:31 UTC 2024
	 (err=<nil>)
	I0520 03:59:36.348840    8140 start.go:83] releasing machines lock for "ha-291700", held for 2m19.0649801s
	I0520 03:59:36.348840    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:38.550170    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:38.550170    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:38.550170    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:41.127415    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:41.127415    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:41.133384    8140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:59:41.133563    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:41.145884    8140 ssh_runner.go:195] Run: cat /version.json
	I0520 03:59:41.145884    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 03:59:43.405200    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:43.405696    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:43.405813    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:43.430907    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 03:59:43.430907    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:43.431364    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 03:59:46.128106    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:46.128904    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:46.128904    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:59:46.155967    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 03:59:46.155967    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 03:59:46.156878    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 03:59:46.217549    8140 ssh_runner.go:235] Completed: cat /version.json: (5.0714936s)
	W0520 03:59:46.217549    8140 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 03:59:46.233107    8140 ssh_runner.go:195] Run: systemctl --version
	I0520 03:59:46.455694    8140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3222461s)
	I0520 03:59:46.469869    8140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 03:59:46.481363    8140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:59:46.493423    8140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 03:59:46.523897    8140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 03:59:46.524010    8140 start.go:494] detecting cgroup driver to use...
	I0520 03:59:46.524266    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:59:46.579011    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 03:59:46.621241    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:59:46.641381    8140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:59:46.654660    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:59:46.687899    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:59:46.722355    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:59:46.753932    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:59:46.789101    8140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:59:46.820349    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:59:46.857410    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:59:46.891315    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 03:59:46.921362    8140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:59:46.951780    8140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:59:46.981382    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:47.185079    8140 ssh_runner.go:195] Run: sudo systemctl restart containerd
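The run of `sed` commands above rewrites `/etc/containerd/config.toml` so containerd uses the `cgroupfs` driver, then reloads and restarts the service. A minimal sketch of the central rewrite, applied to a throwaway copy of the file rather than the real config (the temp path is an illustration; the log edits `/etc/containerd/config.toml` via `sudo`):

```shell
# Demonstrate the SystemdCgroup rewrite on a throwaway copy of config.toml;
# the real flow edits /etc/containerd/config.toml and then runs
# `systemctl daemon-reload && systemctl restart containerd`.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # prints "  SystemdCgroup = false"
rm -f "$cfg"
```

The capture group `( *)` preserves the original indentation, which is why the replacement keeps the TOML nesting intact.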
	I0520 03:59:47.226791    8140 start.go:494] detecting cgroup driver to use...
	I0520 03:59:47.238769    8140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:59:47.287770    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:59:47.327982    8140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:59:47.375428    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:59:47.409377    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:59:47.445178    8140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 03:59:47.512735    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:59:47.537813    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:59:47.581583    8140 ssh_runner.go:195] Run: which cri-dockerd
	I0520 03:59:47.601562    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:59:47.619358    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:59:47.664221    8140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:59:47.865427    8140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:59:48.053930    8140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:59:48.054662    8140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:59:48.107658    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:48.307646    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:59:50.815240    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5075901s)
	I0520 03:59:50.830523    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:59:50.866822    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:59:50.907752    8140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:59:51.112012    8140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:59:51.323078    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:51.558474    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:59:51.611482    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:59:51.653346    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:59:51.864954    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:59:51.973953    8140 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:59:51.987354    8140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:59:51.997139    8140 start.go:562] Will wait 60s for crictl version
	I0520 03:59:52.008621    8140 ssh_runner.go:195] Run: which crictl
	I0520 03:59:52.030577    8140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:59:52.082749    8140 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 03:59:52.093416    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:59:52.131361    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:59:52.164390    8140 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 03:59:52.164390    8140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 03:59:52.168850    8140 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 03:59:52.172255    8140 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 03:59:52.172882    8140 ip.go:210] interface addr: 172.25.240.1/20
	I0520 03:59:52.185372    8140 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 03:59:52.191178    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
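The bash one-liner above updates `/etc/hosts` idempotently: it strips any stale `host.minikube.internal` entry before appending the current gateway IP, so repeated runs never accumulate duplicates. A sketch with a temp file standing in for `/etc/hosts` (illustration only):

```shell
# Idempotent hosts-file update: drop any existing host.minikube.internal
# entry, then append the current mapping. Temp file instead of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal$' "$hosts"
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"   # only the 172.25.240.1 entry remains
rm -f "$hosts"
```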
	I0520 03:59:52.226323    8140 kubeadm.go:877] updating cluster {Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 03:59:52.226323    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:59:52.236471    8140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:59:52.258126    8140 docker.go:685] Got preloaded images: 
	I0520 03:59:52.258204    8140 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 03:59:52.271911    8140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:59:52.304744    8140 ssh_runner.go:195] Run: which lz4
	I0520 03:59:52.310287    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 03:59:52.323193    8140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 03:59:52.329541    8140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 03:59:52.329541    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 03:59:54.317831    8140 docker.go:649] duration metric: took 2.0075404s to copy over tarball
	I0520 03:59:54.330919    8140 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:00:02.828970    8140 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4979683s)
	I0520 04:00:02.829041    8140 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 04:00:02.896932    8140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:00:02.916375    8140 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 04:00:02.958781    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:00:03.183835    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:00:06.258501    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0746618s)
	I0520 04:00:06.269817    8140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:00:06.295748    8140 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:00:06.295827    8140 cache_images.go:84] Images are preloaded, skipping loading
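The preload check above compares the images `docker images` reports against the expected control-plane set: earlier the list was empty ("kube-apiserver:v1.30.1 wasn't preloaded"), which triggered the tarball copy; now every expected image is present, so loading is skipped. A sketch of that decision with static lists simulating the docker output (no daemon assumed, image names taken from the log):

```shell
# Compare expected images against a simulated `docker images` listing;
# mirrors the "Images are preloaded, skipping loading" branch above.
expected='registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/pause:3.9'
got='registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/pause:3.9'
missing=''
for img in $expected; do
  printf '%s\n' "$got" | grep -qx "$img" || missing="$missing $img"
done
[ -z "$missing" ] && echo 'Images are preloaded, skipping loading'
```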
	I0520 04:00:06.295827    8140 kubeadm.go:928] updating node { 172.25.246.119 8443 v1.30.1 docker true true} ...
	I0520 04:00:06.295902    8140 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-291700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.246.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:00:06.305879    8140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:00:06.341542    8140 cni.go:84] Creating CNI manager for ""
	I0520 04:00:06.341542    8140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:00:06.341691    8140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:00:06.341691    8140 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.246.119 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-291700 NodeName:ha-291700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.246.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.246.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:00:06.341691    8140 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.246.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-291700"
	  kubeletExtraArgs:
	    node-ip: 172.25.246.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.246.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 04:00:06.341691    8140 kube-vip.go:115] generating kube-vip config ...
	I0520 04:00:06.356452    8140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 04:00:06.382611    8140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 04:00:06.382866    8140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0520 04:00:06.395268    8140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 04:00:06.409421    8140 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:00:06.422174    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 04:00:06.438956    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0520 04:00:06.468446    8140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:00:06.498238    8140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0520 04:00:06.528812    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 04:00:06.569522    8140 ssh_runner.go:195] Run: grep 172.25.255.254	control-plane.minikube.internal$ /etc/hosts
	I0520 04:00:06.576249    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:00:06.612665    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:00:06.812820    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:00:06.843672    8140 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700 for IP: 172.25.246.119
	I0520 04:00:06.843672    8140 certs.go:194] generating shared ca certs ...
	I0520 04:00:06.843672    8140 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:06.844502    8140 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 04:00:06.845096    8140 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 04:00:06.845293    8140 certs.go:256] generating profile certs ...
	I0520 04:00:06.846114    8140 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key
	I0520 04:00:06.846239    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.crt with IP's: []
	I0520 04:00:06.980779    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.crt ...
	I0520 04:00:06.980779    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.crt: {Name:mkd2c14963adb4751d3090614d567f51986ff21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:06.983103    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key ...
	I0520 04:00:06.983103    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key: {Name:mk948fe68dbd2be6fca73a1daf0e8449e029c49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:06.983586    8140 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f
	I0520 04:00:06.984697    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.246.119 172.25.255.254]
	I0520 04:00:07.127611    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f ...
	I0520 04:00:07.127611    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f: {Name:mk23cd13457bf6593f20ed27ae2e0a814b85ab74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.129254    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f ...
	I0520 04:00:07.129254    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f: {Name:mk4ed1c6beba67aa83ee8f47f02b788d813ee85d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.129840    8140 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.622d349f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt
	I0520 04:00:07.140915    8140 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.622d349f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key
	I0520 04:00:07.142950    8140 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key
	I0520 04:00:07.143224    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt with IP's: []
	I0520 04:00:07.264288    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt ...
	I0520 04:00:07.264288    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt: {Name:mkf98561677b3ccb212261e710a2825a6bdb74f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.266055    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key ...
	I0520 04:00:07.266055    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key: {Name:mk594ed759da3a7df8be676ed30b3bcaa23c6905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:07.267062    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 04:00:07.267752    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 04:00:07.267991    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 04:00:07.268217    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 04:00:07.268389    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 04:00:07.268389    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 04:00:07.268909    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 04:00:07.277117    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 04:00:07.277341    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 04:00:07.278018    8140 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 04:00:07.278054    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 04:00:07.278262    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 04:00:07.278777    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 04:00:07.278995    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 04:00:07.279289    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 04:00:07.279289    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.279289    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 04:00:07.279289    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 04:00:07.281118    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:00:07.326334    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:00:07.373398    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:00:07.419039    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:00:07.460338    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 04:00:07.511330    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:00:07.553788    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:00:07.601851    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 04:00:07.646361    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:00:07.695010    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 04:00:07.737464    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 04:00:07.776501    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:00:07.819432    8140 ssh_runner.go:195] Run: openssl version
	I0520 04:00:07.843212    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:00:07.874148    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.881328    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.898954    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:00:07.924334    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:00:07.960638    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 04:00:07.993144    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 04:00:08.003471    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 04:00:08.015830    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 04:00:08.037363    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 04:00:08.070929    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 04:00:08.105068    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 04:00:08.111642    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 04:00:08.126173    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 04:00:08.148720    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:00:08.183227    8140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:00:08.190326    8140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 04:00:08.190408    8140 kubeadm.go:391] StartCluster: {Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:00:08.198993    8140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:00:08.234029    8140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 04:00:08.267001    8140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:00:08.297749    8140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:00:08.317812    8140 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:00:08.317878    8140 kubeadm.go:156] found existing configuration files:
	
	I0520 04:00:08.333498    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 04:00:08.353247    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:00:08.365878    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:00:08.397069    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 04:00:08.415532    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:00:08.428140    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:00:08.463944    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 04:00:08.481906    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:00:08.496825    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:00:08.530869    8140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 04:00:08.549806    8140 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:00:08.565019    8140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:00:08.582045    8140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:00:09.037291    8140 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 04:00:23.832184    8140 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 04:00:23.832184    8140 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:00:23.832184    8140 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:00:23.833726    8140 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:00:23.833917    8140 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:00:23.834144    8140 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:00:23.836933    8140 out.go:204]   - Generating certificates and keys ...
	I0520 04:00:23.837148    8140 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:00:23.837269    8140 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:00:23.837461    8140 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 04:00:23.837647    8140 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 04:00:23.838194    8140 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-291700 localhost] and IPs [172.25.246.119 127.0.0.1 ::1]
	I0520 04:00:23.838372    8140 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 04:00:23.838664    8140 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-291700 localhost] and IPs [172.25.246.119 127.0.0.1 ::1]
	I0520 04:00:23.838830    8140 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 04:00:23.838964    8140 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 04:00:23.839085    8140 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 04:00:23.839260    8140 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:00:23.839399    8140 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:00:23.839927    8140 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:00:23.840117    8140 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:00:23.840249    8140 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:00:23.842925    8140 out.go:204]   - Booting up control plane ...
	I0520 04:00:23.843708    8140 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:00:23.843708    8140 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:00:23.843708    8140 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:00:23.843708    8140 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:00:23.844417    8140 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:00:23.844622    8140 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:00:23.844673    8140 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 04:00:23.844673    8140 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 04:00:23.845207    8140 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003291694s
	I0520 04:00:23.845400    8140 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 04:00:23.845400    8140 kubeadm.go:309] [api-check] The API server is healthy after 9.070452796s
	I0520 04:00:23.845764    8140 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:00:23.845764    8140 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:00:23.845764    8140 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:00:23.845764    8140 kubeadm.go:309] [mark-control-plane] Marking the node ha-291700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:00:23.845764    8140 kubeadm.go:309] [bootstrap-token] Using token: xb4118.ouebrb3avn5afcax
	I0520 04:00:23.850647    8140 out.go:204]   - Configuring RBAC rules ...
	I0520 04:00:23.851759    8140 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:00:23.851869    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:00:23.851869    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:00:23.852447    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:00:23.852528    8140 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:00:23.852528    8140 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:00:23.853058    8140 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:00:23.853130    8140 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:00:23.853130    8140 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:00:23.853130    8140 kubeadm.go:309] 
	I0520 04:00:23.853130    8140 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:00:23.853130    8140 kubeadm.go:309] 
	I0520 04:00:23.853687    8140 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:00:23.853687    8140 kubeadm.go:309] 
	I0520 04:00:23.853825    8140 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:00:23.853825    8140 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:00:23.853825    8140 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:00:23.853825    8140 kubeadm.go:309] 
	I0520 04:00:23.853825    8140 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:00:23.854407    8140 kubeadm.go:309] 
	I0520 04:00:23.854491    8140 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:00:23.854491    8140 kubeadm.go:309] 
	I0520 04:00:23.854491    8140 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:00:23.854491    8140 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:00:23.854491    8140 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:00:23.855022    8140 kubeadm.go:309] 
	I0520 04:00:23.855056    8140 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:00:23.855056    8140 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:00:23.855056    8140 kubeadm.go:309] 
	I0520 04:00:23.855056    8140 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xb4118.ouebrb3avn5afcax \
	I0520 04:00:23.855764    8140 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 04:00:23.855764    8140 kubeadm.go:309] 	--control-plane 
	I0520 04:00:23.855764    8140 kubeadm.go:309] 
	I0520 04:00:23.855764    8140 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:00:23.855764    8140 kubeadm.go:309] 
	I0520 04:00:23.855764    8140 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xb4118.ouebrb3avn5afcax \
	I0520 04:00:23.856417    8140 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 04:00:23.856417    8140 cni.go:84] Creating CNI manager for ""
	I0520 04:00:23.856417    8140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:00:23.858696    8140 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 04:00:23.872570    8140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 04:00:23.883857    8140 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 04:00:23.883857    8140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 04:00:23.942208    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 04:00:24.695137    8140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:00:24.710688    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-291700 minikube.k8s.io/updated_at=2024_05_20T04_00_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-291700 minikube.k8s.io/primary=true
	I0520 04:00:24.710688    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:24.727002    8140 ops.go:34] apiserver oom_adj: -16
	I0520 04:00:24.925393    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:25.426637    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:25.930407    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:26.429614    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:26.931808    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:27.432184    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:27.934080    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:28.436973    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:28.937050    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:29.442179    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:29.925817    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:30.430734    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:30.929992    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:31.432975    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:31.928233    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:32.437930    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:32.943134    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:33.439850    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:33.937116    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:34.438993    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:34.924426    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:35.433440    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:35.937339    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:00:36.071204    8140 kubeadm.go:1107] duration metric: took 11.3760521s to wait for elevateKubeSystemPrivileges
	W0520 04:00:36.071324    8140 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:00:36.071324    8140 kubeadm.go:393] duration metric: took 27.8808785s to StartCluster
	I0520 04:00:36.071324    8140 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:36.071617    8140 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:00:36.073188    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:00:36.075011    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 04:00:36.075114    8140 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:00:36.075240    8140 start.go:240] waiting for startup goroutines ...
	I0520 04:00:36.075114    8140 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:00:36.075240    8140 addons.go:69] Setting storage-provisioner=true in profile "ha-291700"
	I0520 04:00:36.075394    8140 addons.go:69] Setting default-storageclass=true in profile "ha-291700"
	I0520 04:00:36.075394    8140 addons.go:234] Setting addon storage-provisioner=true in "ha-291700"
	I0520 04:00:36.075517    8140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-291700"
	I0520 04:00:36.075663    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:00:36.075663    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:00:36.076661    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:36.077238    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:36.227472    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 04:00:36.589410    8140 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 04:00:38.468419    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:38.468467    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:38.471189    8140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:00:38.473197    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:38.473197    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:38.473197    8140 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:00:38.473197    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:00:38.474190    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:38.474190    8140 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:00:38.475189    8140 kapi.go:59] client config for ha-291700: &rest.Config{Host:"https://172.25.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:00:38.476194    8140 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 04:00:38.476194    8140 addons.go:234] Setting addon default-storageclass=true in "ha-291700"
	I0520 04:00:38.477198    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:00:38.478192    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:40.894867    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:40.894867    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:40.894867    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:00:40.895902    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:40.895902    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:40.895902    8140 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:00:40.895902    8140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:00:40.895902    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:00:43.299917    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:00:43.300255    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:43.300333    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:00:43.793212    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:00:43.793702    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:43.793854    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:00:43.940931    8140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:00:46.051262    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:00:46.051314    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:46.051314    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:00:46.196808    8140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:00:46.352490    8140 round_trippers.go:463] GET https://172.25.255.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 04:00:46.352490    8140 round_trippers.go:469] Request Headers:
	I0520 04:00:46.352490    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:00:46.352490    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:00:46.369833    8140 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 04:00:46.370860    8140 round_trippers.go:463] PUT https://172.25.255.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 04:00:46.370860    8140 round_trippers.go:469] Request Headers:
	I0520 04:00:46.370860    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:00:46.370860    8140 round_trippers.go:473]     Content-Type: application/json
	I0520 04:00:46.370860    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:00:46.374809    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:00:46.378410    8140 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 04:00:46.382098    8140 addons.go:505] duration metric: took 10.3059651s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 04:00:46.382098    8140 start.go:245] waiting for cluster config update ...
	I0520 04:00:46.382098    8140 start.go:254] writing updated cluster config ...
	I0520 04:00:46.385158    8140 out.go:177] 
	I0520 04:00:46.394090    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:00:46.394090    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:00:46.402100    8140 out.go:177] * Starting "ha-291700-m02" control-plane node in "ha-291700" cluster
	I0520 04:00:46.404094    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:00:46.404094    8140 cache.go:56] Caching tarball of preloaded images
	I0520 04:00:46.405100    8140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:00:46.405100    8140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:00:46.405100    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:00:46.408113    8140 start.go:360] acquireMachinesLock for ha-291700-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:00:46.408113    8140 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-291700-m02"
	I0520 04:00:46.408113    8140 start.go:93] Provisioning new machine with config: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:00:46.408113    8140 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 04:00:46.411093    8140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:00:46.411093    8140 start.go:159] libmachine.API.Create for "ha-291700" (driver="hyperv")
	I0520 04:00:46.411093    8140 client.go:168] LocalClient.Create starting
	I0520 04:00:46.411093    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:00:46.412095    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:00:48.400489    8140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:00:48.400489    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:48.400489    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:00:50.218640    8140 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:00:50.218727    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:50.218803    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:00:51.723699    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:00:51.724168    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:51.724168    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:00:55.451444    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:00:55.451541    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:55.454081    8140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:00:55.922259    8140 main.go:141] libmachine: Creating SSH key...
	I0520 04:00:56.005523    8140 main.go:141] libmachine: Creating VM...
	I0520 04:00:56.005523    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:00:58.991996    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:00:58.992335    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:00:58.992404    8140 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:00:58.992465    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:01:00.816037    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:01:00.816037    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:00.816037    8140 main.go:141] libmachine: Creating VHD
	I0520 04:01:00.816895    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:01:04.722880    8140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D75076A2-73A0-410E-9D70-05A9600AE588
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:01:04.723590    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:04.723590    8140 main.go:141] libmachine: Writing magic tar header
	I0520 04:01:04.723590    8140 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:01:04.737455    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:01:07.983005    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:07.983005    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:07.983621    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\disk.vhd' -SizeBytes 20000MB
	I0520 04:01:10.589860    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:10.589860    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:10.590150    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-291700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:01:14.378563    8140 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-291700-m02 Off   0           0                 00:00:00 Operating normally 9.0
	
	
	
	I0520 04:01:14.378563    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:14.378563    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-291700-m02 -DynamicMemoryEnabled $false
	I0520 04:01:16.775152    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:16.775152    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:16.775573    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-291700-m02 -Count 2
	I0520 04:01:19.101810    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:19.101810    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:19.102891    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-291700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\boot2docker.iso'
	I0520 04:01:21.773822    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:21.773822    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:21.774389    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-291700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\disk.vhd'
	I0520 04:01:24.598758    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:24.598758    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:24.598758    8140 main.go:141] libmachine: Starting VM...
	I0520 04:01:24.599429    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-291700-m02
	I0520 04:01:27.795590    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:27.795590    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:27.795590    8140 main.go:141] libmachine: Waiting for host to start...
	I0520 04:01:27.795590    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:30.209918    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:30.209918    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:30.210853    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:32.907719    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:32.907719    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:33.918033    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:36.296883    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:36.296883    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:36.297015    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:39.010538    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:39.010764    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:40.018298    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:42.354914    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:42.354914    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:42.355423    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:45.014991    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:45.015817    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:46.021977    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:48.331696    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:48.331696    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:48.331696    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:50.987847    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:01:50.987847    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:51.994420    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:54.348091    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:54.348091    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:54.348091    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:01:57.013792    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:01:57.013792    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:57.013792    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:01:59.253425    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:01:59.253466    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:01:59.253560    8140 machine.go:94] provisionDockerMachine start ...
	I0520 04:01:59.253560    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:01.520560    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:01.520560    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:01.521474    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:04.178773    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:04.178773    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:04.185698    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:04.195726    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:04.195726    8140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:02:04.331212    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:02:04.331212    8140 buildroot.go:166] provisioning hostname "ha-291700-m02"
	I0520 04:02:04.331342    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:06.566253    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:06.566253    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:06.566253    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:09.237358    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:09.238011    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:09.243686    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:09.244360    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:09.244360    8140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-291700-m02 && echo "ha-291700-m02" | sudo tee /etc/hostname
	I0520 04:02:09.412095    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-291700-m02
	
	I0520 04:02:09.412203    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:11.636662    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:11.637663    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:11.637663    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:14.302806    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:14.302806    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:14.310080    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:14.310901    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:14.310901    8140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-291700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-291700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-291700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:02:14.471496    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:02:14.471496    8140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 04:02:14.472043    8140 buildroot.go:174] setting up certificates
	I0520 04:02:14.472043    8140 provision.go:84] configureAuth start
	I0520 04:02:14.472136    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:16.693716    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:16.693783    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:16.693840    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:19.355751    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:19.355814    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:19.355814    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:21.595264    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:21.595264    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:21.595264    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:24.228167    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:24.228787    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:24.228787    8140 provision.go:143] copyHostCerts
	I0520 04:02:24.228997    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 04:02:24.229311    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 04:02:24.229311    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 04:02:24.229463    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 04:02:24.230703    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 04:02:24.230998    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 04:02:24.230998    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 04:02:24.230998    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 04:02:24.232316    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 04:02:24.232534    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 04:02:24.232534    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 04:02:24.233037    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 04:02:24.233935    8140 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-291700-m02 san=[127.0.0.1 172.25.251.208 ha-291700-m02 localhost minikube]
	I0520 04:02:24.392333    8140 provision.go:177] copyRemoteCerts
	I0520 04:02:24.408286    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:02:24.408286    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:26.659100    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:26.659281    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:26.659389    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:29.377658    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:29.377658    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:29.377658    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:02:29.484046    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.075681s)
	I0520 04:02:29.484046    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 04:02:29.484752    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 04:02:29.532060    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 04:02:29.532185    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:02:29.579485    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 04:02:29.580136    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:02:29.625438    8140 provision.go:87] duration metric: took 15.1533724s to configureAuth
	I0520 04:02:29.625498    8140 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:02:29.626045    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:02:29.626140    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:31.851742    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:31.851742    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:31.851742    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:34.536144    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:34.537162    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:34.543649    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:34.544235    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:34.544388    8140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:02:34.687099    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:02:34.687188    8140 buildroot.go:70] root file system type: tmpfs
	I0520 04:02:34.687386    8140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:02:34.687466    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:36.937580    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:36.937580    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:36.938331    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:39.610719    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:39.610719    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:39.616566    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:39.617333    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:39.617333    8140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.246.119"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:02:39.779578    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.246.119
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:02:39.779747    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:41.983059    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:41.983434    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:41.983561    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:44.602661    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:44.602661    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:44.609013    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:02:44.609809    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:02:44.609809    8140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:02:46.759757    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 04:02:46.759830    8140 machine.go:97] duration metric: took 47.5062001s to provisionDockerMachine
	I0520 04:02:46.759887    8140 client.go:171] duration metric: took 2m0.3486219s to LocalClient.Create
	I0520 04:02:46.759968    8140 start.go:167] duration metric: took 2m0.3487027s to libmachine.API.Create "ha-291700"
	I0520 04:02:46.760024    8140 start.go:293] postStartSetup for "ha-291700-m02" (driver="hyperv")
	I0520 04:02:46.760063    8140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:02:46.776172    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:02:46.776172    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:48.990546    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:48.990546    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:48.990546    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:51.649439    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:51.649439    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:51.649555    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:02:51.754923    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.978664s)
	I0520 04:02:51.769718    8140 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:02:51.776835    8140 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 04:02:51.776835    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 04:02:51.777650    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 04:02:51.778175    8140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 04:02:51.778175    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 04:02:51.792174    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:02:51.811370    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 04:02:51.864157    8140 start.go:296] duration metric: took 5.1040861s for postStartSetup
	I0520 04:02:51.867618    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:54.064146    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:54.064146    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:54.064777    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:02:56.712779    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:02:56.712779    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:56.712779    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:02:56.716179    8140 start.go:128] duration metric: took 2m10.3078801s to createHost
	I0520 04:02:56.716179    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:02:58.925340    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:02:58.925554    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:02:58.925627    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:01.569119    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:01.569119    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:01.576085    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:03:01.576085    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:03:01.576085    8140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 04:03:01.706791    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202981.697667814
	
	I0520 04:03:01.706791    8140 fix.go:216] guest clock: 1716202981.697667814
	I0520 04:03:01.706791    8140 fix.go:229] Guest: 2024-05-20 04:03:01.697667814 -0700 PDT Remote: 2024-05-20 04:02:56.7161798 -0700 PDT m=+345.131732601 (delta=4.981488014s)
	I0520 04:03:01.706791    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:03.885551    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:03.885551    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:03.886746    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:06.520617    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:06.520671    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:06.525944    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:03:06.526698    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.208 22 <nil> <nil>}
	I0520 04:03:06.526698    8140 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716202981
	I0520 04:03:06.670776    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 11:03:01 UTC 2024
	
	I0520 04:03:06.670776    8140 fix.go:236] clock set: Mon May 20 11:03:01 UTC 2024
	 (err=<nil>)
	I0520 04:03:06.670776    8140 start.go:83] releasing machines lock for "ha-291700-m02", held for 2m20.2624615s
	I0520 04:03:06.670776    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:08.899283    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:08.899283    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:08.899283    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:11.550196    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:11.550196    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:11.553206    8140 out.go:177] * Found network options:
	I0520 04:03:11.556114    8140 out.go:177]   - NO_PROXY=172.25.246.119
	W0520 04:03:11.558562    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:03:11.559999    8140 out.go:177]   - NO_PROXY=172.25.246.119
	W0520 04:03:11.562645    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:03:11.563996    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:03:11.566991    8140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:03:11.566991    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:11.576991    8140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 04:03:11.576991    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m02 ).state
	I0520 04:03:13.880281    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:13.880281    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:13.880281    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:13.880960    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:13.881098    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:13.881199    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:16.693059    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:16.693059    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:16.693405    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:03:16.719097    8140 main.go:141] libmachine: [stdout =====>] : 172.25.251.208
	
	I0520 04:03:16.719097    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:16.719758    8140 sshutil.go:53] new ssh client: &{IP:172.25.251.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m02\id_rsa Username:docker}
	I0520 04:03:16.851569    8140 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2745694s)
	I0520 04:03:16.851569    8140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2845691s)
	W0520 04:03:16.851569    8140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:03:16.864755    8140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 04:03:16.898939    8140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:03:16.899021    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:03:16.899325    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:03:16.946858    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 04:03:16.978301    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:03:16.997587    8140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:03:17.012140    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:03:17.046017    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:03:17.083278    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:03:17.116264    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:03:17.148565    8140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:03:17.182636    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:03:17.216492    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:03:17.249667    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:03:17.283054    8140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:03:17.313077    8140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:03:17.344656    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:17.549393    8140 ssh_runner.go:195] Run: sudo systemctl restart containerd
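The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place, forcing `SystemdCgroup = false` so containerd uses the cgroupfs driver. The same substitution can be sketched in Go with a regexp (a simplified stand-in for the shell pipeline, not minikube's code):

```go
package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs mimics:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// The capture group preserves the original indentation of the TOML key.
func forceCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"  SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in))
}
```

As in the log, the edit is idempotent: applying it to an already-rewritten config leaves the file unchanged, which is why the provisioner can run the whole sed batch unconditionally.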
	I0520 04:03:17.583807    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:03:17.599119    8140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:03:17.635086    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:03:17.669529    8140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:03:17.714513    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:03:17.751967    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:03:17.788129    8140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:03:17.851138    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:03:17.875620    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:03:17.922853    8140 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:03:17.941153    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:03:17.959159    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:03:18.003718    8140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:03:18.212248    8140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:03:18.407987    8140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:03:18.407987    8140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:03:18.464695    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:18.669456    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:03:21.216395    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5469341s)
	I0520 04:03:21.230270    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:03:21.271662    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:03:21.307537    8140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:03:21.507281    8140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:03:21.716883    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:21.911393    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:03:21.957509    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:03:21.996695    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:22.194239    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:03:22.309128    8140 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:03:22.322016    8140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:03:22.330698    8140 start.go:562] Will wait 60s for crictl version
	I0520 04:03:22.342433    8140 ssh_runner.go:195] Run: which crictl
	I0520 04:03:22.361547    8140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:03:22.424880    8140 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 04:03:22.438711    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:03:22.488929    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:03:22.522926    8140 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 04:03:22.526610    8140 out.go:177]   - env NO_PROXY=172.25.246.119
	I0520 04:03:22.528781    8140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 04:03:22.533144    8140 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 04:03:22.536005    8140 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 04:03:22.536005    8140 ip.go:210] interface addr: 172.25.240.1/20
	I0520 04:03:22.550115    8140 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 04:03:22.556089    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
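The `{ grep -v $'\thost.minikube.internal$' …; echo …; } > /tmp/h.$$; sudo cp …` command above rewrites `/etc/hosts` so that exactly one `host.minikube.internal` entry survives regardless of what was there before. A Go sketch of that filter-then-append step (function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line ending in "\t<host>" and appends a
// fresh "ip\thost" entry, matching the shell's grep -v + echo rewrite.
func upsertHost(hosts, ip, host string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry: replaced below
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+host)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.25.240.9\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "172.25.240.1", "host.minikube.internal"))
}
```

The preceding `grep 172.25.240.1\thost.minikube.internal$ /etc/hosts` is a fast-path check; the rewrite only runs when the exact entry is missing.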
	I0520 04:03:22.577499    8140 mustload.go:65] Loading cluster: ha-291700
	I0520 04:03:22.577499    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:03:22.578644    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:03:24.818680    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:24.818680    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:24.818680    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:03:24.819327    8140 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700 for IP: 172.25.251.208
	I0520 04:03:24.819327    8140 certs.go:194] generating shared ca certs ...
	I0520 04:03:24.819327    8140 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:03:24.820044    8140 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 04:03:24.820231    8140 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 04:03:24.820231    8140 certs.go:256] generating profile certs ...
	I0520 04:03:24.821033    8140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key
	I0520 04:03:24.821816    8140 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5
	I0520 04:03:24.821816    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.246.119 172.25.251.208 172.25.255.254]
	I0520 04:03:25.170504    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5 ...
	I0520 04:03:25.171502    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5: {Name:mk85482bea0486d2a9770aad77782ccb41e9e5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:03:25.172453    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5 ...
	I0520 04:03:25.172453    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5: {Name:mkf153b9fb4974203d1d3ed68ef74d40bc1c5df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:03:25.173162    8140 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.d9d5fcf5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt
	I0520 04:03:25.187234    8140 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.d9d5fcf5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key
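The cert step above generates one apiserver serving certificate whose IP SANs cover the in-cluster service IP (10.96.0.1), localhost, both control-plane nodes, and the HA virtual IP, so the same cert validates however the apiserver is reached. A self-signed sketch of building such a cert with `crypto/x509` (minikube actually signs with its cluster CA; names here are illustrative):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServingCert builds a server cert whose IP SANs cover every address
// the apiserver may be reached at. Self-signed for brevity.
func newServingCert(ips []string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	// SAN list mirroring the log's entry for ha-291700.
	cert, err := newServingCert([]string{
		"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"172.25.246.119", "172.25.251.208", "172.25.255.254",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(cert.IPAddresses)) // SANs round-trip through the parsed cert
}
```

The `.d9d5fcf5` suffix in the log is a hash of the SAN set: when a node joins and the IP list changes, the hash changes and the cert is regenerated rather than reused.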
	I0520 04:03:25.188233    8140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key
	I0520 04:03:25.188233    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 04:03:25.188803    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 04:03:25.189091    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 04:03:25.189239    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 04:03:25.189418    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 04:03:25.189611    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 04:03:25.189821    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 04:03:25.189821    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 04:03:25.190229    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 04:03:25.190229    8140 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 04:03:25.190229    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 04:03:25.190229    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 04:03:25.191247    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 04:03:25.191247    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 04:03:25.191247    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 04:03:25.192255    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 04:03:25.192255    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:25.192255    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 04:03:25.192255    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:03:27.487624    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:27.487729    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:27.487839    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:30.216359    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:03:30.216450    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:30.216634    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:03:30.319215    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0520 04:03:30.331917    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 04:03:30.371914    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0520 04:03:30.379303    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 04:03:30.418096    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 04:03:30.426241    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 04:03:30.467470    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0520 04:03:30.475146    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 04:03:30.512432    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0520 04:03:30.520853    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 04:03:30.555306    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0520 04:03:30.562074    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 04:03:30.583314    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:03:30.653138    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:03:30.718123    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:03:30.764102    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:03:30.807772    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 04:03:30.854255    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:03:30.901978    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:03:30.953423    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 04:03:30.999512    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 04:03:31.044040    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:03:31.102591    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 04:03:31.145037    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 04:03:31.183744    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 04:03:31.216348    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 04:03:31.251884    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 04:03:31.287349    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 04:03:31.319240    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 04:03:31.348796    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 04:03:31.400708    8140 ssh_runner.go:195] Run: openssl version
	I0520 04:03:31.421599    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 04:03:31.452365    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 04:03:31.461518    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 04:03:31.474429    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 04:03:31.494737    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:03:31.533727    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:03:31.566072    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:31.572921    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:31.584218    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:03:31.605374    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:03:31.639396    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 04:03:31.669630    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 04:03:31.676540    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 04:03:31.691161    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 04:03:31.712884    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 04:03:31.745913    8140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:03:31.752893    8140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
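The runner above probes for the cert with `stat` and treats a non-zero exit as "not generated yet, likely first start". A minimal local sketch of the same decision, using `os.Stat` in place of a remote `stat` (the path is illustrative and will not exist on most machines):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// certExists mirrors the runner's check: a stat failure with
// "no such file" is read as "cert not generated yet".
func certExists(path string) bool {
	_, err := os.Stat(path)
	return !errors.Is(err, fs.ErrNotExist)
}

func main() {
	// Illustrative path; on a fresh node this does not exist yet.
	if !certExists("/var/lib/minikube/certs/apiserver-kubelet-client.crt") {
		fmt.Println("cert doesn't exist, likely first start")
	}
}
```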
	I0520 04:03:31.753213    8140 kubeadm.go:928] updating node {m02 172.25.251.208 8443 v1.30.1 docker true true} ...
	I0520 04:03:31.753489    8140 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-291700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.251.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:03:31.753515    8140 kube-vip.go:115] generating kube-vip config ...
	I0520 04:03:31.766356    8140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 04:03:31.791451    8140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 04:03:31.792480    8140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
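The generated manifest sets the leader-election lease timings (`vip_leaseduration: 5`, `vip_renewdeadline: 3`, `vip_retryperiod: 1`). Assuming kube-vip follows the usual client-go leader-election ordering, these must satisfy leaseDuration > renewDeadline > retryPeriod; a small check of that invariant against the values above (the invariant itself is an assumption about kube-vip's internals):

```go
package main

import "fmt"

// validLease checks the ordering client-go's leader election expects:
// leaseDuration > renewDeadline > retryPeriod (all in seconds).
func validLease(leaseDuration, renewDeadline, retryPeriod int) bool {
	return leaseDuration > renewDeadline &&
		renewDeadline > retryPeriod &&
		retryPeriod > 0
}

func main() {
	// Values from the generated kube-vip manifest above.
	fmt.Println(validLease(5, 3, 1)) // → true
}
```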
	I0520 04:03:31.805801    8140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 04:03:31.821924    8140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 04:03:31.835822    8140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 04:03:31.860122    8140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0520 04:03:31.860191    8140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0520 04:03:31.860191    8140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0520 04:03:32.985976    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:03:32.998032    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:03:33.005502    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 04:03:33.005502    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 04:03:33.158163    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:03:33.171657    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:03:33.218379    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 04:03:33.218379    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 04:03:33.449943    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:03:33.507238    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:03:33.520318    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:03:33.553855    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 04:03:33.553855    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 04:03:34.234455    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 04:03:34.253562    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0520 04:03:34.298521    8140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:03:34.330740    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 04:03:34.382401    8140 ssh_runner.go:195] Run: grep 172.25.255.254	control-plane.minikube.internal$ /etc/hosts
	I0520 04:03:34.389577    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
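The `/etc/hosts` rewrite above drops any stale `control-plane.minikube.internal` line before appending the current VIP, then copies the temp file into place. The same filter-and-append logic on an in-memory copy (the sample hosts content is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// setHostEntry removes lines for the given hostname and appends a
// fresh "ip\thost" entry, mirroring the grep -v / echo pipeline.
func setHostEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(setHostEntry(hosts, "172.25.255.254", "control-plane.minikube.internal"))
}
```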
	I0520 04:03:34.428743    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:03:34.632047    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:03:34.666189    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:03:34.666970    8140 start.go:316] joinCluster: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:03:34.667024    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 04:03:34.667024    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:03:36.871474    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:03:36.871474    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:36.871989    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:03:39.515806    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:03:39.515883    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:03:39.516037    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:03:39.706353    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0392501s)
	I0520 04:03:39.706474    8140 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:03:39.707980    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o1rrbw.w0v2ukl5tfk9vfwn --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m02 --control-plane --apiserver-advertise-address=172.25.251.208 --apiserver-bind-port=8443"
	I0520 04:04:23.481489    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o1rrbw.w0v2ukl5tfk9vfwn --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m02 --control-plane --apiserver-advertise-address=172.25.251.208 --apiserver-bind-port=8443": (43.7733839s)
	I0520 04:04:23.481489    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 04:04:24.315645    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-291700-m02 minikube.k8s.io/updated_at=2024_05_20T04_04_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-291700 minikube.k8s.io/primary=false
	I0520 04:04:24.509390    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-291700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 04:04:24.676390    8140 start.go:318] duration metric: took 50.00934s to joinCluster
	I0520 04:04:24.676638    8140 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:04:24.682745    8140 out.go:177] * Verifying Kubernetes components...
	I0520 04:04:24.677900    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:04:24.697768    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:04:25.062692    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:04:25.089727    8140 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:04:25.090275    8140 kapi.go:59] client config for ha-291700: &rest.Config{Host:"https://172.25.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 04:04:25.090275    8140 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.255.254:8443 with https://172.25.246.119:8443
	I0520 04:04:25.091284    8140 node_ready.go:35] waiting up to 6m0s for node "ha-291700-m02" to be "Ready" ...
	I0520 04:04:25.091284    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:25.091284    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:25.091284    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:25.091284    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:25.105435    8140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0520 04:04:25.601447    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:25.601447    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:25.601447    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:25.601447    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:25.608056    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:26.106471    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:26.106543    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:26.106543    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:26.106543    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:26.112279    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:26.594514    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:26.594594    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:26.594594    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:26.594594    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:26.602860    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:27.101643    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:27.101643    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:27.101643    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:27.101643    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:27.108278    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:27.109928    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:27.594572    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:27.594654    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:27.594746    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:27.594746    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:27.600627    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:28.102604    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:28.102604    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:28.102604    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:28.102604    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:28.119566    8140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0520 04:04:28.597285    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:28.597285    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:28.597285    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:28.597285    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:28.603041    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:29.094635    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:29.094635    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:29.094758    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:29.094758    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:29.100345    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:29.601569    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:29.601569    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:29.601764    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:29.601764    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:29.609936    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:29.609936    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:30.093110    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:30.093110    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:30.093172    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:30.093172    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:30.097670    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:30.605493    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:30.605493    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:30.605699    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:30.605699    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:30.859146    8140 round_trippers.go:574] Response Status: 200 OK in 253 milliseconds
	I0520 04:04:31.105400    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:31.105400    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:31.105400    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:31.105400    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:31.150966    8140 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0520 04:04:31.598375    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:31.598684    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:31.598684    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:31.598684    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:31.604392    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:32.102158    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:32.102433    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:32.102433    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:32.102433    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:32.107801    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:32.109140    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:32.605536    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:32.605536    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:32.605536    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:32.605536    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:32.610630    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:33.099028    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:33.099028    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:33.099028    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:33.099028    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:33.105439    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:33.600063    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:33.600209    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:33.600209    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:33.600209    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:33.607139    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:34.101635    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:34.101635    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:34.101635    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:34.101635    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:34.108200    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:34.602487    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:34.602699    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:34.602699    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:34.602699    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:34.608525    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:34.609523    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:35.092398    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:35.092478    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:35.092478    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:35.092478    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:35.098264    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:35.602514    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:35.602703    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:35.602703    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:35.602703    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:35.610464    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:36.096793    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:36.096978    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:36.096978    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:36.096978    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:36.102216    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:36.595072    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:36.595116    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:36.595116    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:36.595116    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:36.607895    8140 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 04:04:36.609605    8140 node_ready.go:53] node "ha-291700-m02" has status "Ready":"False"
	I0520 04:04:37.104930    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:37.104930    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.105195    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.105195    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.110498    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:37.595116    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:37.595230    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.595230    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.595359    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.599730    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.601185    8140 node_ready.go:49] node "ha-291700-m02" has status "Ready":"True"
	I0520 04:04:37.601243    8140 node_ready.go:38] duration metric: took 12.509881s for node "ha-291700-m02" to be "Ready" ...
	I0520 04:04:37.601243    8140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:04:37.601372    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:37.601442    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.601442    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.601466    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.611237    8140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 04:04:37.620211    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.621213    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hczp
	I0520 04:04:37.621213    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.621213    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.621213    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.627227    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:37.628245    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:37.628245    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.628245    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.628245    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.632261    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.632261    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.633227    8140 pod_ready.go:81] duration metric: took 13.0159ms for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.633227    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.633227    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gglsg
	I0520 04:04:37.633227    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.633227    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.633227    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.637226    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:04:37.638427    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:37.638639    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.638639    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.638666    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.644902    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:37.645692    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.645692    8140 pod_ready.go:81] duration metric: took 12.465ms for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.645692    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.645692    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700
	I0520 04:04:37.645692    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.645692    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.645692    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.649507    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:04:37.650505    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:37.650505    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.650505    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.650505    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.654505    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.654505    8140 pod_ready.go:92] pod "etcd-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.654505    8140 pod_ready.go:81] duration metric: took 8.8135ms for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.654505    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.654505    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m02
	I0520 04:04:37.654505    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.654505    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.654505    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.659509    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:37.660510    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:37.660510    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.660510    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.660510    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.664550    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:37.665207    8140 pod_ready.go:92] pod "etcd-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:37.665207    8140 pod_ready.go:81] duration metric: took 10.7018ms for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.665267    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:37.800661    8140 request.go:629] Waited for 135.3935ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:04:37.800661    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:04:37.800661    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:37.800661    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:37.800661    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:37.806562    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:38.005031    8140 request.go:629] Waited for 197.2291ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.005232    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.005232    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.005232    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.005232    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.011805    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:38.012751    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:38.012751    8140 pod_ready.go:81] duration metric: took 347.4829ms for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.012751    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.209956    8140 request.go:629] Waited for 196.8945ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:04:38.209956    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:04:38.209956    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.209956    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.209956    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.216638    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:38.399243    8140 request.go:629] Waited for 180.2564ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:38.399440    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:38.399440    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.399440    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.399440    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.404038    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:38.405368    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:38.405422    8140 pod_ready.go:81] duration metric: took 392.6708ms for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.405475    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.601879    8140 request.go:629] Waited for 196.3366ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:04:38.602173    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:04:38.602173    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.602173    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.602173    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.609750    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:38.808623    8140 request.go:629] Waited for 197.9781ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.808894    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:38.808969    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.808969    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.809019    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:38.813338    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:38.814321    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:38.814321    8140 pod_ready.go:81] duration metric: took 408.8456ms for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.814321    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:38.997286    8140 request.go:629] Waited for 181.7548ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:04:38.997351    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:04:38.997351    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:38.997351    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:38.997351    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.001934    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:39.201087    8140 request.go:629] Waited for 196.8246ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.201308    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.201308    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.201308    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.201308    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.204916    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:04:39.205848    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:39.205848    8140 pod_ready.go:81] duration metric: took 391.5269ms for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.205848    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.402173    8140 request.go:629] Waited for 195.354ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:04:39.402376    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:04:39.402376    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.402376    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.402376    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.412330    8140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 04:04:39.606195    8140 request.go:629] Waited for 192.7183ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.606277    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:39.606277    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.606277    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.606277    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.611919    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:39.613076    8140 pod_ready.go:92] pod "kube-proxy-94csf" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:39.613076    8140 pod_ready.go:81] duration metric: took 407.2264ms for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.613076    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:39.808628    8140 request.go:629] Waited for 195.5524ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:04:39.808865    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:04:39.808865    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:39.808967    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:39.808993    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:39.817046    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:40.011007    8140 request.go:629] Waited for 192.7305ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.011244    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.011244    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.011244    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.011244    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.016066    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:40.017502    8140 pod_ready.go:92] pod "kube-proxy-xq4tv" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:40.017673    8140 pod_ready.go:81] duration metric: took 404.5964ms for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.017673    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.197148    8140 request.go:629] Waited for 179.1438ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:04:40.197339    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:04:40.197433    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.197433    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.197433    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.202745    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:40.399873    8140 request.go:629] Waited for 196.6544ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.399971    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:04:40.400116    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.400116    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.400116    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.404941    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:04:40.405734    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:40.405734    8140 pod_ready.go:81] duration metric: took 388.0611ms for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.405734    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.602482    8140 request.go:629] Waited for 196.7478ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:04:40.602901    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:04:40.602901    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.602901    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.602901    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.608947    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:04:40.807185    8140 request.go:629] Waited for 197.0206ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:40.807185    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:04:40.807185    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.807185    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.807185    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.812862    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:40.814574    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:04:40.814574    8140 pod_ready.go:81] duration metric: took 408.839ms for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:04:40.814574    8140 pod_ready.go:38] duration metric: took 3.213269s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:04:40.814636    8140 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:04:40.826469    8140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:04:40.857187    8140 api_server.go:72] duration metric: took 16.1804815s to wait for apiserver process to appear ...
	I0520 04:04:40.857187    8140 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:04:40.857187    8140 api_server.go:253] Checking apiserver healthz at https://172.25.246.119:8443/healthz ...
	I0520 04:04:40.866236    8140 api_server.go:279] https://172.25.246.119:8443/healthz returned 200:
	ok
	I0520 04:04:40.866751    8140 round_trippers.go:463] GET https://172.25.246.119:8443/version
	I0520 04:04:40.866847    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:40.866890    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:40.866890    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:40.868009    8140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 04:04:40.868971    8140 api_server.go:141] control plane version: v1.30.1
	I0520 04:04:40.869032    8140 api_server.go:131] duration metric: took 11.845ms to wait for apiserver health ...
	I0520 04:04:40.869032    8140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 04:04:41.011085    8140 request.go:629] Waited for 141.8285ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.011168    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.011168    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.011274    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.011274    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.019389    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:04:41.026583    8140 system_pods.go:59] 17 kube-system pods found
	I0520 04:04:41.026583    8140 system_pods.go:61] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:04:41.026583    8140 system_pods.go:61] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:04:41.026715    8140 system_pods.go:61] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:04:41.026715    8140 system_pods.go:74] duration metric: took 157.6823ms to wait for pod list to return data ...
	I0520 04:04:41.026715    8140 default_sa.go:34] waiting for default service account to be created ...
	I0520 04:04:41.200270    8140 request.go:629] Waited for 173.3681ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:04:41.200403    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:04:41.200403    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.200403    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.200403    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.206311    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:04:41.206792    8140 default_sa.go:45] found service account: "default"
	I0520 04:04:41.206792    8140 default_sa.go:55] duration metric: took 180.0774ms for default service account to be created ...
	I0520 04:04:41.206850    8140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 04:04:41.403835    8140 request.go:629] Waited for 196.6805ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.403939    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:04:41.403939    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.403939    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.403939    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.411570    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:41.419192    8140 system_pods.go:86] 17 kube-system pods found
	I0520 04:04:41.419268    8140 system_pods.go:89] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:04:41.419268    8140 system_pods.go:89] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:04:41.419268    8140 system_pods.go:89] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:04:41.419268    8140 system_pods.go:89] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:04:41.419371    8140 system_pods.go:89] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:04:41.419459    8140 system_pods.go:89] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:04:41.419556    8140 system_pods.go:126] duration metric: took 212.6087ms to wait for k8s-apps to be running ...
	I0520 04:04:41.419556    8140 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 04:04:41.430689    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:04:41.457656    8140 system_svc.go:56] duration metric: took 38.0998ms WaitForService to wait for kubelet
	I0520 04:04:41.457656    8140 kubeadm.go:576] duration metric: took 16.7809492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:04:41.457656    8140 node_conditions.go:102] verifying NodePressure condition ...
	I0520 04:04:41.605339    8140 request.go:629] Waited for 147.4792ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes
	I0520 04:04:41.605446    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes
	I0520 04:04:41.605553    8140 round_trippers.go:469] Request Headers:
	I0520 04:04:41.605617    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:04:41.605617    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:04:41.613028    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:04:41.614391    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:04:41.614478    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:04:41.614478    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:04:41.614478    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:04:41.614478    8140 node_conditions.go:105] duration metric: took 156.8215ms to run NodePressure ...
	I0520 04:04:41.614478    8140 start.go:240] waiting for startup goroutines ...
	I0520 04:04:41.614553    8140 start.go:254] writing updated cluster config ...
	I0520 04:04:41.618050    8140 out.go:177] 
	I0520 04:04:41.630301    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:04:41.630301    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:04:41.636963    8140 out.go:177] * Starting "ha-291700-m03" control-plane node in "ha-291700" cluster
	I0520 04:04:41.639281    8140 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:04:41.639472    8140 cache.go:56] Caching tarball of preloaded images
	I0520 04:04:41.639472    8140 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:04:41.640004    8140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:04:41.640264    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:04:41.645064    8140 start.go:360] acquireMachinesLock for ha-291700-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:04:41.645349    8140 start.go:364] duration metric: took 285.7µs to acquireMachinesLock for "ha-291700-m03"
	I0520 04:04:41.645535    8140 start.go:93] Provisioning new machine with config: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:04:41.645569    8140 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0520 04:04:41.647884    8140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:04:41.647884    8140 start.go:159] libmachine.API.Create for "ha-291700" (driver="hyperv")
	I0520 04:04:41.648677    8140 client.go:168] LocalClient.Create starting
	I0520 04:04:41.648824    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:04:41.648824    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:04:41.649399    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:04:41.649532    8140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:04:41.649760    8140 main.go:141] libmachine: Decoding PEM data...
	I0520 04:04:41.649760    8140 main.go:141] libmachine: Parsing certificate...
	I0520 04:04:41.649960    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:04:43.635145    8140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:04:43.636170    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:43.636257    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:04:45.459649    8140 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:04:45.459649    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:45.459744    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:04:47.035628    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:04:47.035628    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:47.036308    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:04:51.020446    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:04:51.021379    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:51.023579    8140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:04:51.448241    8140 main.go:141] libmachine: Creating SSH key...
	I0520 04:04:51.599957    8140 main.go:141] libmachine: Creating VM...
	I0520 04:04:51.599957    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:04:54.783181    8140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:04:54.784132    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:54.784298    8140 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:04:54.784298    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:04:56.647658    8140 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:04:56.648359    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:04:56.648359    8140 main.go:141] libmachine: Creating VHD
	I0520 04:04:56.648359    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:05:00.602064    8140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D69B95F6-287C-4368-9338-09435B916E07
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:05:00.602064    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:00.602064    8140 main.go:141] libmachine: Writing magic tar header
	I0520 04:05:00.602064    8140 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:05:00.614017    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:05:03.937827    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:03.937827    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:03.938251    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\disk.vhd' -SizeBytes 20000MB
	I0520 04:05:06.602045    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:06.602946    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:06.603031    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-291700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:05:10.482588    8140 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-291700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:05:10.482588    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:10.482588    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-291700-m03 -DynamicMemoryEnabled $false
	I0520 04:05:12.867816    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:12.867816    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:12.867965    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-291700-m03 -Count 2
	I0520 04:05:15.197610    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:15.197610    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:15.197749    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-291700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\boot2docker.iso'
	I0520 04:05:17.984346    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:17.984346    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:17.984900    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-291700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\disk.vhd'
	I0520 04:05:20.873036    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:20.873441    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:20.873480    8140 main.go:141] libmachine: Starting VM...
	I0520 04:05:20.873556    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-291700-m03
	I0520 04:05:24.107797    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:24.108487    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:24.108487    8140 main.go:141] libmachine: Waiting for host to start...
	I0520 04:05:24.108633    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:26.592401    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:26.592401    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:26.592401    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:29.349788    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:29.349788    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:30.365589    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:32.755998    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:32.756578    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:32.756578    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:35.447089    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:35.447089    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:36.452879    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:38.811886    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:38.811886    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:38.811886    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:41.481105    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:41.481914    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:42.494860    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:44.819328    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:44.819328    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:44.819328    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:47.541923    8140 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:05:47.541982    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:48.544515    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:50.888132    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:50.888347    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:50.888347    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:05:53.613931    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:05:53.614795    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:53.614954    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:55.870785    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:55.871581    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:55.871808    8140 machine.go:94] provisionDockerMachine start ...
	I0520 04:05:55.872049    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:05:58.142888    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:05:58.142888    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:05:58.143026    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:00.870554    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:00.870554    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:00.877622    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:00.878359    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:00.878359    8140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:06:01.017692    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:06:01.017755    8140 buildroot.go:166] provisioning hostname "ha-291700-m03"
	I0520 04:06:01.017820    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:03.284053    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:03.284053    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:03.284053    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:05.949430    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:05.949826    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:05.960186    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:05.961216    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:05.961216    8140 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-291700-m03 && echo "ha-291700-m03" | sudo tee /etc/hostname
	I0520 04:06:06.138083    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-291700-m03
	
	I0520 04:06:06.138192    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:08.420630    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:08.420630    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:08.421453    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:11.098063    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:11.098063    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:11.103377    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:11.104090    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:11.104090    8140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-291700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-291700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-291700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:06:11.273244    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:06:11.273802    8140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 04:06:11.273802    8140 buildroot.go:174] setting up certificates
	I0520 04:06:11.273802    8140 provision.go:84] configureAuth start
	I0520 04:06:11.273933    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:13.539411    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:13.539411    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:13.539472    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:16.215406    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:16.215406    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:16.215474    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:18.497138    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:18.497138    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:18.497138    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:21.191760    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:21.192122    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:21.192122    8140 provision.go:143] copyHostCerts
	I0520 04:06:21.192205    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 04:06:21.192730    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 04:06:21.192730    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 04:06:21.193339    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 04:06:21.194098    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 04:06:21.194975    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 04:06:21.195037    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 04:06:21.195585    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 04:06:21.196429    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 04:06:21.196429    8140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 04:06:21.196429    8140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 04:06:21.197242    8140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 04:06:21.198033    8140 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-291700-m03 san=[127.0.0.1 172.25.246.110 ha-291700-m03 localhost minikube]
	I0520 04:06:21.782734    8140 provision.go:177] copyRemoteCerts
	I0520 04:06:21.795820    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:06:21.796899    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:24.060032    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:24.060032    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:24.060151    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:26.729004    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:26.729004    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:26.729486    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:06:26.843778    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0479499s)
	I0520 04:06:26.843778    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 04:06:26.844314    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 04:06:26.892348    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 04:06:26.893036    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:06:26.954915    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 04:06:26.955574    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:06:27.004647    8140 provision.go:87] duration metric: took 15.7308197s to configureAuth
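An aside on what `configureAuth` just produced: minikube generates the server certificate in Go, but a roughly equivalent (hypothetical) openssl invocation with the SANs shown in the `generating server cert` line above would look like this — a self-signed sketch, not the actual provisioning code.

```shell
# Hypothetical openssl equivalent of the server cert generated above
# (minikube does this in Go; self-signed here for brevity).
openssl req -x509 -newkey rsa:2048 -nodes -keyout server-key.pem \
  -out server.pem -days 1 -subj "/O=jenkins.ha-291700-m03" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.25.246.110,DNS:ha-291700-m03,DNS:localhost,DNS:minikube"
# Inspect the SANs that ended up in the cert:
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
```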
	I0520 04:06:27.004695    8140 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:06:27.005184    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:06:27.005184    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:29.254115    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:29.254115    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:29.254115    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:31.916021    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:31.916021    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:31.923057    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:31.923600    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:31.923704    8140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:06:32.067687    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:06:32.067771    8140 buildroot.go:70] root file system type: tmpfs
	I0520 04:06:32.067897    8140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:06:32.068043    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:34.327459    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:34.327459    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:34.328088    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:37.023339    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:37.023339    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:37.030458    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:37.030458    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:37.030458    8140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.246.119"
	Environment="NO_PROXY=172.25.246.119,172.25.251.208"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:06:37.208095    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.246.119
	Environment=NO_PROXY=172.25.246.119,172.25.251.208
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
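Note on the unit rendered above: it sets `Environment="NO_PROXY=…"` twice. systemd applies `Environment=` lines in order, and a later assignment of the same variable overrides the earlier one, so dockerd ends up seeing the two-address list:

```ini
[Service]
# Both lines are kept by systemd and applied in order; the second
# assignment of NO_PROXY wins, so the effective value is the
# two-address list.
Environment="NO_PROXY=172.25.246.119"
Environment="NO_PROXY=172.25.246.119,172.25.251.208"
```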
	I0520 04:06:37.208170    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:39.458227    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:39.458415    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:39.458415    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:42.145132    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:42.145132    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:42.151659    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:42.151659    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:42.151659    8140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:06:44.386267    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
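The `diff -u old new || { mv …; daemon-reload; restart; }` command above is an update-if-changed idiom: `diff` exits non-zero when the files differ or, as in this run, when `/lib/systemd/system/docker.service` does not exist yet, and only then is the new unit installed and the service restarted. A minimal sketch with scratch files (names are illustrative):

```shell
# Update-if-changed sketch: replace the installed file only when the
# freshly rendered one differs (or the installed one is missing).
cur=$(mktemp) && new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$new"
printf 'ExecStart=/usr/bin/dockerd\n' > "$cur"
diff -u "$cur" "$new" >/dev/null || {
  mv "$new" "$cur"          # in the log: sudo mv + daemon-reload + restart
  echo "replaced"
}
echo "done"                 # identical files here, so nothing was replaced
```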
	I0520 04:06:44.386267    8140 machine.go:97] duration metric: took 48.514382s to provisionDockerMachine
	I0520 04:06:44.386267    8140 client.go:171] duration metric: took 2m2.7373937s to LocalClient.Create
	I0520 04:06:44.386267    8140 start.go:167] duration metric: took 2m2.7381864s to libmachine.API.Create "ha-291700"
	I0520 04:06:44.386267    8140 start.go:293] postStartSetup for "ha-291700-m03" (driver="hyperv")
	I0520 04:06:44.386267    8140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:06:44.403499    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:06:44.403499    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:46.662596    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:46.662596    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:46.663368    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:49.351795    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:49.351974    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:49.352313    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:06:49.473961    8140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0704535s)
	I0520 04:06:49.487231    8140 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:06:49.494968    8140 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 04:06:49.494968    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 04:06:49.495310    8140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 04:06:49.496400    8140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 04:06:49.496485    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 04:06:49.515434    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:06:49.537022    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 04:06:49.587024    8140 start.go:296] duration metric: took 5.2007488s for postStartSetup
	I0520 04:06:49.589751    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:51.829484    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:51.829484    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:51.829484    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:54.516793    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:54.516793    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:54.517181    8140 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\config.json ...
	I0520 04:06:54.519719    8140 start.go:128] duration metric: took 2m12.8739365s to createHost
	I0520 04:06:54.519719    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:06:56.778876    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:06:56.778876    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:56.779108    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:06:59.467237    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:06:59.467237    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:06:59.478959    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:06:59.480025    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:06:59.480025    8140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 04:06:59.625534    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716203219.629909686
	
	I0520 04:06:59.625655    8140 fix.go:216] guest clock: 1716203219.629909686
	I0520 04:06:59.625655    8140 fix.go:229] Guest: 2024-05-20 04:06:59.629909686 -0700 PDT Remote: 2024-05-20 04:06:54.519719 -0700 PDT m=+582.934892001 (delta=5.110190686s)
	I0520 04:06:59.625751    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:01.925040    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:01.925040    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:01.925936    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:04.600576    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:04.601497    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:04.608186    8140 main.go:141] libmachine: Using SSH client type: native
	I0520 04:07:04.608186    8140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.110 22 <nil> <nil>}
	I0520 04:07:04.608735    8140 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716203219
	I0520 04:07:04.763853    8140 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 11:06:59 UTC 2024
	
	I0520 04:07:04.763937    8140 fix.go:236] clock set: Mon May 20 11:06:59 UTC 2024
	 (err=<nil>)
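The clock fix-up above takes two SSH round-trips: read the guest clock with `date +%s.%N`, compare it with the host-side timestamp to get the ~5.11s delta logged by `fix.go`, then reset the guest with `sudo date -s @<epoch>`. The arithmetic, using the guest epoch from the log (the host value here is a hypothetical reading for illustration):

```shell
# Guest-clock delta computation as in fix.go (guest value from the log;
# the host epoch is an illustrative reading ~5s behind the guest).
guest_epoch=1716203219      # from `date +%s.%N` on the guest
host_epoch=1716203214       # hypothetical host-side reading
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"      # prints "delta=5s"
# minikube then runs on the guest:  sudo date -s @<target epoch>
```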
	I0520 04:07:04.763937    8140 start.go:83] releasing machines lock for "ha-291700-m03", held for 2m23.1183587s
	I0520 04:07:04.765190    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:06.997206    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:06.997302    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:06.997359    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:09.687310    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:09.687310    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:09.693394    8140 out.go:177] * Found network options:
	I0520 04:07:09.696472    8140 out.go:177]   - NO_PROXY=172.25.246.119,172.25.251.208
	W0520 04:07:09.698761    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.698761    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:07:09.700661    8140 out.go:177]   - NO_PROXY=172.25.246.119,172.25.251.208
	W0520 04:07:09.703505    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.703505    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.705510    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 04:07:09.705510    8140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 04:07:09.707515    8140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:07:09.707515    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:09.716395    8140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 04:07:09.716395    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700-m03 ).state
	I0520 04:07:12.050046    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:12.050046    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:12.050046    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:12.050437    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:12.050437    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:12.050437    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:14.867371    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:14.867442    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:14.867587    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:07:14.896464    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.110
	
	I0520 04:07:14.896542    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:14.896814    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700-m03\id_rsa Username:docker}
	I0520 04:07:14.973713    8140 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2573098s)
	W0520 04:07:14.973713    8140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:07:14.987818    8140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 04:07:15.111594    8140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4039692s)
	I0520 04:07:15.111594    8140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:07:15.111688    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:07:15.111924    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:07:15.161634    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 04:07:15.199321    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:07:15.220029    8140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:07:15.233400    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:07:15.269617    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:07:15.306731    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:07:15.340920    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:07:15.375146    8140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:07:15.414974    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:07:15.449603    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:07:15.485061    8140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:07:15.518070    8140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:07:15.551366    8140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:07:15.584237    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:15.799409    8140 ssh_runner.go:195] Run: sudo systemctl restart containerd
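The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place to pin the cgroupfs driver and the runc v2 runtime. The `SystemdCgroup` edit can be reproduced against a scratch file:

```shell
# Reproduce the SystemdCgroup rewrite from the log against a scratch
# copy instead of /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"     # prints "  SystemdCgroup = false"
```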
	I0520 04:07:15.837886    8140 start.go:494] detecting cgroup driver to use...
	I0520 04:07:15.850741    8140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:07:15.889807    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:07:15.929703    8140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:07:15.988696    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:07:16.027206    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:07:16.071417    8140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:07:16.138587    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:07:16.165725    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:07:16.219401    8140 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:07:16.238640    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:07:16.257035    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:07:16.302431    8140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:07:16.528082    8140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:07:16.716621    8140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:07:16.716738    8140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:07:16.770200    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:16.980015    8140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:07:19.515064    8140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5339643s)
	I0520 04:07:19.527630    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:07:19.564655    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:07:19.602943    8140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:07:19.801164    8140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:07:20.004012    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:20.209540    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:07:20.252228    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:07:20.290282    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:20.503636    8140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:07:20.628805    8140 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:07:20.642694    8140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:07:20.651979    8140 start.go:562] Will wait 60s for crictl version
	I0520 04:07:20.665194    8140 ssh_runner.go:195] Run: which crictl
	I0520 04:07:20.687145    8140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:07:20.748980    8140 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 04:07:20.759673    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:07:20.804966    8140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:07:20.842972    8140 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 04:07:20.845198    8140 out.go:177]   - env NO_PROXY=172.25.246.119
	I0520 04:07:20.850008    8140 out.go:177]   - env NO_PROXY=172.25.246.119,172.25.251.208
	I0520 04:07:20.852925    8140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 04:07:20.858306    8140 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 04:07:20.861317    8140 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 04:07:20.861317    8140 ip.go:210] interface addr: 172.25.240.1/20
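The `getIPForInterface` search above walks the host adapters, logs each non-matching name, and returns the first interface whose name starts with the requested prefix. A Linux-side sketch of the same prefix matching against `/sys/class/net` (the Windows code enumerates adapters via the Go `net` package instead; this stand-in only mirrors the control flow):

```shell
#!/usr/bin/env bash
# Return the first network interface whose name starts with the given
# prefix, logging rejects the way ip.go does. Enumeration via
# /sys/class/net is a Linux-only stand-in for the real adapter walk.
set -euo pipefail

find_iface_by_prefix() {
  local prefix="$1" name path
  for path in /sys/class/net/*; do
    name="$(basename "$path")"
    case "$name" in
      "$prefix"*) printf 'found prefix matching interface: "%s"\n' "$name"
                  return 0 ;;
      *)          printf '"%s" does not match prefix "%s"\n' "$name" "$prefix" >&2 ;;
    esac
  done
  return 1
}

find_iface_by_prefix "lo"
```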
	I0520 04:07:20.877430    8140 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 04:07:20.884804    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
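The hosts-file update above uses a grep-v/append/copy pattern so the `host.minikube.internal` entry is replaced rather than duplicated on repeat runs. A minimal sketch of the same technique against a scratch file (file paths here are illustrative, not minikube's):

```shell
#!/usr/bin/env bash
# Idempotently pin a hostname to an IP in a hosts-format file: drop any
# existing line for the host, append the fresh mapping, then copy the
# result back over the original, mirroring the logged bash one-liner.
set -euo pipefail

update_hosts_entry() {
  local file="$1" ip="$2" host="$3" tmp
  tmp="$(mktemp)"
  # Keep every line that does not end in "<tab><host>", then add ours.
  { grep -v $'\t'"${host}"'$' "$file" || true; printf '%s\t%s\n' "$ip" "$host"; } > "$tmp"
  cp "$tmp" "$file" && rm -f "$tmp"
}

hosts_file="$(mktemp)"
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts_file"
update_hosts_entry "$hosts_file" 172.25.240.1 host.minikube.internal
update_hosts_entry "$hosts_file" 172.25.240.1 host.minikube.internal  # second run is a no-op
cat "$hosts_file"
```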
	I0520 04:07:20.908564    8140 mustload.go:65] Loading cluster: ha-291700
	I0520 04:07:20.909514    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:07:20.910237    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:07:23.163287    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:23.163287    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:23.163287    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:07:23.163897    8140 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700 for IP: 172.25.246.110
	I0520 04:07:23.163897    8140 certs.go:194] generating shared ca certs ...
	I0520 04:07:23.163897    8140 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:07:23.164894    8140 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 04:07:23.165256    8140 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 04:07:23.165422    8140 certs.go:256] generating profile certs ...
	I0520 04:07:23.166103    8140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\client.key
	I0520 04:07:23.166103    8140 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3
	I0520 04:07:23.166103    8140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.246.119 172.25.251.208 172.25.246.110 172.25.255.254]
	I0520 04:07:23.462542    8140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3 ...
	I0520 04:07:23.462542    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3: {Name:mke17989d921d57f7069f27df1aaa6c3fa0167c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:07:23.464635    8140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3 ...
	I0520 04:07:23.464635    8140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3: {Name:mk2883fbcbcb35c3737a67461e3ce0ec6404974d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:07:23.465082    8140 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt.3e70f0f3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt
	I0520 04:07:23.479090    8140 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key.3e70f0f3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key
	I0520 04:07:23.481094    8140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 04:07:23.481094    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 04:07:23.482194    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 04:07:23.482498    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 04:07:23.483110    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 04:07:23.483110    8140 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 04:07:23.483110    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 04:07:23.483801    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 04:07:23.484266    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 04:07:23.484659    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 04:07:23.485212    8140 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 04:07:23.485212    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:23.485212    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 04:07:23.485747    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 04:07:23.486012    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:07:25.785961    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:25.786152    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:25.786152    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:28.544673    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:07:28.545646    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:28.545843    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:07:28.655567    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0520 04:07:28.664274    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 04:07:28.696903    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0520 04:07:28.704165    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 04:07:28.744512    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 04:07:28.750583    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 04:07:28.786064    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0520 04:07:28.792648    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0520 04:07:28.830866    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0520 04:07:28.838643    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 04:07:28.873050    8140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0520 04:07:28.883462    8140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 04:07:28.904035    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:07:28.954088    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:07:29.005677    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:07:29.053434    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:07:29.117189    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 04:07:29.168983    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:07:29.218539    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:07:29.262883    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-291700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 04:07:29.309234    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:07:29.364734    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 04:07:29.417129    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 04:07:29.466025    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 04:07:29.501015    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 04:07:29.534436    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 04:07:29.568618    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0520 04:07:29.602186    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 04:07:29.635521    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 04:07:29.668296    8140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 04:07:29.714683    8140 ssh_runner.go:195] Run: openssl version
	I0520 04:07:29.736283    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 04:07:29.769442    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 04:07:29.776239    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 04:07:29.789027    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 04:07:29.810991    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 04:07:29.844774    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 04:07:29.878643    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 04:07:29.887557    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 04:07:29.901917    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 04:07:29.925508    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:07:29.963567    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:07:29.997787    8140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:30.008905    8140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:30.021933    8140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:07:30.043971    8140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
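The `openssl x509 -hash -noout` / `ln -fs … /etc/ssl/certs/<hash>.0` pairs above follow OpenSSL's capath lookup convention: TLS libraries locate a CA in a certificate directory by its subject-name hash plus a `.0` suffix. The same two steps, sketched against a throwaway self-signed certificate in a temp directory (the subject and filenames are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail
certdir="$(mktemp -d)"

# Throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certdir/demo.key" -out "$certdir/demoCA.pem" 2>/dev/null

# OpenSSL looks up CAs in a capath as <subject-hash>.0, so compute the
# hash and create the symlink the way the log above does with ln -fs.
hash="$(openssl x509 -hash -noout -in "$certdir/demoCA.pem")"
ln -fs "$certdir/demoCA.pem" "$certdir/$hash.0"
echo "$hash"
```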
	I0520 04:07:30.090670    8140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:07:30.097207    8140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 04:07:30.098418    8140 kubeadm.go:928] updating node {m03 172.25.246.110 8443 v1.30.1 docker true true} ...
	I0520 04:07:30.098418    8140 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-291700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.246.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:07:30.098418    8140 kube-vip.go:115] generating kube-vip config ...
	I0520 04:07:30.112363    8140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 04:07:30.137495    8140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 04:07:30.138230    8140 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.255.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 04:07:30.152347    8140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 04:07:30.174290    8140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 04:07:30.188191    8140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 04:07:30.208249    8140 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 04:07:30.208396    8140 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 04:07:30.208396    8140 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
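The `?checksum=file:…sha256` suffix on each URL above tells the fetcher to compare the downloaded binary against a detached SHA-256 file served alongside it. The verification step itself reduces to a `sha256sum` comparison, demonstrated here on a local stand-in rather than a real `dl.k8s.io` download (dl.k8s.io's `.sha256` files carry only the bare digest, so this sketch regenerates the checksum file in `sha256sum -c` format):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Verify a payload against a detached SHA-256 checksum file; any
# corruption of the payload makes sha256sum -c exit non-zero.
work="$(mktemp -d)"; cd "$work"
printf 'fake kubelet payload\n' > kubelet
sha256sum kubelet > kubelet.sha256   # stands in for the served .sha256
sha256sum -c kubelet.sha256          # prints "kubelet: OK" on a match
```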
	I0520 04:07:30.208547    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:07:30.208547    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:07:30.228440    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 04:07:30.228440    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:07:30.228440    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 04:07:30.236307    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 04:07:30.236557    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 04:07:30.285879    8140 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:07:30.285879    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 04:07:30.286184    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 04:07:30.300086    8140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 04:07:30.329907    8140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 04:07:30.330335    8140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
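Each binary above is probed with `stat` first and transferred only when the probe exits non-zero, which is what lets a repeat run skip the ~100 MB kubelet upload. A local sketch of that copy-if-missing check, using `cp` in place of minikube's scp-over-SSH (paths are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

copy_if_missing() {
  local src="$1" dst="$2"
  # Mirror the logged existence check: stat the destination and only
  # transfer when it is absent (stat exits 1 with "No such file").
  if stat "$dst" >/dev/null 2>&1; then
    echo "skip $dst"
  else
    cp "$src" "$dst"
    echo "copied $dst"
  fi
}

work="$(mktemp -d)"
echo kubectl-bytes > "$work/kubectl.cached"
copy_if_missing "$work/kubectl.cached" "$work/kubectl"   # first run copies
copy_if_missing "$work/kubectl.cached" "$work/kubectl"   # second run skips
```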
	I0520 04:07:31.555903    8140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 04:07:31.574872    8140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0520 04:07:31.608357    8140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:07:31.642522    8140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 04:07:31.693690    8140 ssh_runner.go:195] Run: grep 172.25.255.254	control-plane.minikube.internal$ /etc/hosts
	I0520 04:07:31.722153    8140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.255.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:07:31.760523    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:07:31.969876    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:07:32.000328    8140 host.go:66] Checking if "ha-291700" exists ...
	I0520 04:07:32.000328    8140 start.go:316] joinCluster: &{Name:ha-291700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-291700 Namespace:default APIServerHAVIP:172.25.255.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.119 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.251.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.246.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:07:32.001416    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 04:07:32.001416    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-291700 ).state
	I0520 04:07:34.256107    8140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:07:34.256163    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:34.256163    8140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-291700 ).networkadapters[0]).ipaddresses[0]
	I0520 04:07:36.963330    8140 main.go:141] libmachine: [stdout =====>] : 172.25.246.119
	
	I0520 04:07:36.963330    8140 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:07:36.963733    8140 sshutil.go:53] new ssh client: &{IP:172.25.246.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-291700\id_rsa Username:docker}
	I0520 04:07:37.177852    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1764281s)
	I0520 04:07:37.177852    8140 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.246.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:07:37.177852    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lu63e6.9wbxciunnwhkook6 --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m03 --control-plane --apiserver-advertise-address=172.25.246.110 --apiserver-bind-port=8443"
	I0520 04:08:21.357924    8140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lu63e6.9wbxciunnwhkook6 --discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-291700-m03 --control-plane --apiserver-advertise-address=172.25.246.110 --apiserver-bind-port=8443": (44.1800015s)
	I0520 04:08:21.357924    8140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 04:08:22.165043    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-291700-m03 minikube.k8s.io/updated_at=2024_05_20T04_08_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-291700 minikube.k8s.io/primary=false
	I0520 04:08:22.333910    8140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-291700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 04:08:22.492314    8140 start.go:318] duration metric: took 50.4919052s to joinCluster
	I0520 04:08:22.493294    8140 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.25.246.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:08:22.493294    8140 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:08:22.496287    8140 out.go:177] * Verifying Kubernetes components...
	I0520 04:08:22.514586    8140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:08:22.929437    8140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:08:22.970508    8140 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:08:22.971262    8140 kapi.go:59] client config for ha-291700: &rest.Config{Host:"https://172.25.255.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-291700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 04:08:22.971386    8140 kubeadm.go:477] Overriding stale ClientConfig host https://172.25.255.254:8443 with https://172.25.246.119:8443
	I0520 04:08:22.971967    8140 node_ready.go:35] waiting up to 6m0s for node "ha-291700-m03" to be "Ready" ...
	I0520 04:08:22.972372    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:22.972372    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:22.972372    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:22.972372    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:22.991537    8140 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0520 04:08:23.483443    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:23.483443    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:23.483443    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:23.483443    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:23.489131    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:23.975185    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:23.975248    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:23.975310    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:23.975310    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:23.980629    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:24.473704    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:24.473856    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:24.473856    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:24.473856    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:24.477477    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:24.986735    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:24.986735    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:24.986735    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:24.986826    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:25.013583    8140 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0520 04:08:25.015227    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:25.478831    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:25.478831    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:25.478831    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:25.478831    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:25.484414    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:25.982834    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:25.982834    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:25.982834    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:25.982834    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:25.987416    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:26.474666    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:26.474726    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:26.474782    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:26.474782    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:26.479482    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:26.984489    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:26.984489    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:26.984560    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:26.984560    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:26.988881    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:27.487357    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:27.487407    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:27.487407    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:27.487407    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:27.492551    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:27.493328    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:27.977344    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:27.977521    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:27.977521    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:27.977521    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:27.983501    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:28.486359    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:28.486569    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:28.486569    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:28.486569    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:28.490892    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:28.981851    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:28.981851    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:28.981851    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:28.981851    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:29.151834    8140 round_trippers.go:574] Response Status: 200 OK in 169 milliseconds
	I0520 04:08:29.487263    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:29.487263    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:29.487263    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:29.487263    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:29.493489    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:29.494320    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:29.975530    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:29.975530    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:29.975530    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:29.975530    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:29.992116    8140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0520 04:08:30.474490    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:30.474490    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:30.474616    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:30.474616    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:30.478916    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:30.978736    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:30.978930    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:30.978930    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:30.978930    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:30.983978    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:31.482442    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:31.482547    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:31.482547    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:31.482547    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:31.497518    8140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0520 04:08:31.499058    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:31.983321    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:31.983410    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:31.983410    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:31.983486    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:31.990824    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:08:32.482716    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:32.482835    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:32.482891    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:32.482891    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:32.488464    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:32.984907    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:32.985030    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:32.985030    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:32.985030    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:32.989461    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:33.485429    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:33.485429    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:33.485429    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:33.485647    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:33.490353    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:33.975187    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:33.975273    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:33.975333    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:33.975333    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:33.980195    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:33.980400    8140 node_ready.go:53] node "ha-291700-m03" has status "Ready":"False"
	I0520 04:08:34.478909    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:34.478909    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:34.478909    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:34.478909    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:34.485041    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:34.981699    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:34.982004    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:34.982004    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:34.982004    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:34.986814    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:35.481613    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:35.481894    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:35.481976    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:35.481976    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:35.487705    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:35.984343    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:35.984343    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:35.984343    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:35.984343    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:35.990172    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:35.990822    8140 node_ready.go:49] node "ha-291700-m03" has status "Ready":"True"
	I0520 04:08:35.990822    8140 node_ready.go:38] duration metric: took 13.0185341s for node "ha-291700-m03" to be "Ready" ...
	I0520 04:08:35.990900    8140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:08:35.991021    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:35.991021    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:35.991021    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:35.991074    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.011325    8140 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0520 04:08:36.022495    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.023055    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4hczp
	I0520 04:08:36.023130    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.023130    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.023130    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.028430    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:36.029717    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.029717    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.029717    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.029717    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.034371    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.035827    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.035878    8140 pod_ready.go:81] duration metric: took 13.383ms for pod "coredns-7db6d8ff4d-4hczp" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.035878    8140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.036015    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gglsg
	I0520 04:08:36.036015    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.036015    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.036015    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.040341    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:36.041280    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.041328    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.041328    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.041328    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.045649    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.045804    8140 pod_ready.go:92] pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.045804    8140 pod_ready.go:81] duration metric: took 9.926ms for pod "coredns-7db6d8ff4d-gglsg" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.045804    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.045804    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700
	I0520 04:08:36.045804    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.045804    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.045804    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.050919    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:36.052536    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.052634    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.052634    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.052634    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.060722    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:08:36.061874    8140 pod_ready.go:92] pod "etcd-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.061874    8140 pod_ready.go:81] duration metric: took 16.0703ms for pod "etcd-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.061874    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.061874    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m02
	I0520 04:08:36.061874    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.061874    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.062428    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.065510    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:36.067708    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:36.067708    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.067708    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.067708    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.071807    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.072443    8140 pod_ready.go:92] pod "etcd-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.072501    8140 pod_ready.go:81] duration metric: took 10.6271ms for pod "etcd-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.072501    8140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.188196    8140 request.go:629] Waited for 115.4247ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m03
	I0520 04:08:36.188414    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/etcd-ha-291700-m03
	I0520 04:08:36.188466    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.188466    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.188466    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.192939    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:36.394412    8140 request.go:629] Waited for 199.4678ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:36.394412    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:36.394412    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.394412    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.394412    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.402137    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:08:36.405608    8140 pod_ready.go:92] pod "etcd-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.405652    8140 pod_ready.go:81] duration metric: took 333.1504ms for pod "etcd-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.405707    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.587003    8140 request.go:629] Waited for 180.8893ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:08:36.587136    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700
	I0520 04:08:36.587136    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.587136    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.587258    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.598587    8140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 04:08:36.789108    8140 request.go:629] Waited for 188.4749ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.789318    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:36.789318    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.789318    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.789318    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.794630    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:36.795876    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:36.795943    8140 pod_ready.go:81] duration metric: took 390.235ms for pod "kube-apiserver-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.795943    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:36.993081    8140 request.go:629] Waited for 196.8581ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:08:36.993081    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m02
	I0520 04:08:36.993380    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:36.993380    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:36.993501    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:36.999279    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.194590    8140 request.go:629] Waited for 194.9954ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:37.194859    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:37.194859    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.194859    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.194859    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.201462    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:37.202652    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:37.202815    8140 pod_ready.go:81] duration metric: took 406.8113ms for pod "kube-apiserver-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.202904    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.397048    8140 request.go:629] Waited for 194.0153ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m03
	I0520 04:08:37.397357    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-291700-m03
	I0520 04:08:37.397388    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.397388    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.397458    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.402211    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.597055    8140 request.go:629] Waited for 192.9502ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:37.597055    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:37.597055    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.597353    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.597353    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.602235    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.604228    8140 pod_ready.go:92] pod "kube-apiserver-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:37.604302    8140 pod_ready.go:81] duration metric: took 401.3972ms for pod "kube-apiserver-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.604302    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.800120    8140 request.go:629] Waited for 195.4894ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:08:37.800382    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700
	I0520 04:08:37.800382    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.800444    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.800444    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.805185    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:37.987466    8140 request.go:629] Waited for 179.9799ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:37.987466    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:37.987466    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:37.987466    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:37.987691    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:37.992873    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:37.994327    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:37.994327    8140 pod_ready.go:81] duration metric: took 389.9599ms for pod "kube-controller-manager-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:37.994389    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.195282    8140 request.go:629] Waited for 200.5386ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:08:38.195472    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m02
	I0520 04:08:38.195574    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.195574    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.195574    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.200312    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:38.399279    8140 request.go:629] Waited for 196.8346ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:38.399390    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:38.399482    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.399482    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.399482    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.406421    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:38.406624    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:38.406624    8140 pod_ready.go:81] duration metric: took 412.2344ms for pod "kube-controller-manager-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.406624    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.595145    8140 request.go:629] Waited for 188.5206ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m03
	I0520 04:08:38.595145    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-291700-m03
	I0520 04:08:38.595145    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.595145    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.595145    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.599049    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:38.799994    8140 request.go:629] Waited for 199.5237ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:38.800108    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:38.800108    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.800263    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.800263    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.805621    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:38.807012    8140 pod_ready.go:92] pod "kube-controller-manager-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:38.807012    8140 pod_ready.go:81] duration metric: took 400.3877ms for pod "kube-controller-manager-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.807012    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:38.985717    8140 request.go:629] Waited for 177.9129ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:08:38.985830    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-94csf
	I0520 04:08:38.985830    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:38.985958    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:38.985958    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:38.991332    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:39.188202    8140 request.go:629] Waited for 195.7834ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:39.188452    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:39.188511    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.188511    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.188511    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.194248    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:39.195451    8140 pod_ready.go:92] pod "kube-proxy-94csf" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:39.195451    8140 pod_ready.go:81] duration metric: took 388.4379ms for pod "kube-proxy-94csf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.195451    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qg9wf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.390538    8140 request.go:629] Waited for 195.0864ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qg9wf
	I0520 04:08:39.390538    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qg9wf
	I0520 04:08:39.390538    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.390538    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.390538    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.399199    8140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 04:08:39.593956    8140 request.go:629] Waited for 193.5558ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:39.594252    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:39.594411    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.594459    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.594459    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.598092    8140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 04:08:39.599506    8140 pod_ready.go:92] pod "kube-proxy-qg9wf" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:39.599506    8140 pod_ready.go:81] duration metric: took 404.0549ms for pod "kube-proxy-qg9wf" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.599506    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.797395    8140 request.go:629] Waited for 197.8885ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:08:39.797787    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xq4tv
	I0520 04:08:39.797787    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.797787    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.797787    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.804623    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:39.985402    8140 request.go:629] Waited for 179.6838ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:39.985484    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:39.985484    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:39.985484    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:39.985484    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:39.992284    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:39.993159    8140 pod_ready.go:92] pod "kube-proxy-xq4tv" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:39.993159    8140 pod_ready.go:81] duration metric: took 393.6523ms for pod "kube-proxy-xq4tv" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:39.993159    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.187181    8140 request.go:629] Waited for 193.8794ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:08:40.187427    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700
	I0520 04:08:40.187427    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.187544    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.187544    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.193015    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:40.391516    8140 request.go:629] Waited for 197.1153ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:40.391702    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700
	I0520 04:08:40.391786    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.391786    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.391786    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.396047    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:40.397723    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:40.397723    8140 pod_ready.go:81] duration metric: took 404.5627ms for pod "kube-scheduler-ha-291700" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.397783    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.593760    8140 request.go:629] Waited for 195.5703ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:08:40.594063    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m02
	I0520 04:08:40.594063    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.594112    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.594112    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.601940    8140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 04:08:40.799065    8140 request.go:629] Waited for 195.9465ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:40.799174    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m02
	I0520 04:08:40.799249    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.799249    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.799249    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.804646    8140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 04:08:40.806917    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:40.806917    8140 pod_ready.go:81] duration metric: took 409.1338ms for pod "kube-scheduler-ha-291700-m02" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.806917    8140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:40.989453    8140 request.go:629] Waited for 182.201ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m03
	I0520 04:08:40.989609    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-291700-m03
	I0520 04:08:40.989733    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:40.989799    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:40.989824    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:40.995958    8140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 04:08:41.191888    8140 request.go:629] Waited for 193.7904ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:41.192176    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes/ha-291700-m03
	I0520 04:08:41.192176    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.192176    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.192176    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.203935    8140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0520 04:08:41.204761    8140 pod_ready.go:92] pod "kube-scheduler-ha-291700-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 04:08:41.204815    8140 pod_ready.go:81] duration metric: took 397.8973ms for pod "kube-scheduler-ha-291700-m03" in "kube-system" namespace to be "Ready" ...
	I0520 04:08:41.204815    8140 pod_ready.go:38] duration metric: took 5.2139068s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 04:08:41.204865    8140 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:08:41.217883    8140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:08:41.252275    8140 api_server.go:72] duration metric: took 18.7589511s to wait for apiserver process to appear ...
	I0520 04:08:41.252275    8140 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:08:41.252275    8140 api_server.go:253] Checking apiserver healthz at https://172.25.246.119:8443/healthz ...
	I0520 04:08:41.263288    8140 api_server.go:279] https://172.25.246.119:8443/healthz returned 200:
	ok
	I0520 04:08:41.263288    8140 round_trippers.go:463] GET https://172.25.246.119:8443/version
	I0520 04:08:41.263288    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.263288    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.263288    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.265654    8140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 04:08:41.265775    8140 api_server.go:141] control plane version: v1.30.1
	I0520 04:08:41.265838    8140 api_server.go:131] duration metric: took 13.5634ms to wait for apiserver health ...
	I0520 04:08:41.265896    8140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 04:08:41.394479    8140 request.go:629] Waited for 128.532ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.394912    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.395014    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.395014    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.395014    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.404880    8140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 04:08:41.415801    8140 system_pods.go:59] 24 kube-system pods found
	I0520 04:08:41.415801    8140 system_pods.go:61] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "etcd-ha-291700-m03" [321ff776-654f-4a7b-9973-5b6a672438b1] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kindnet-vdmtq" [12f186e5-765c-4bfe-aecc-91080f16c74d] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-apiserver-ha-291700-m03" [95739f9c-0bd0-4323-8b37-78d67b268722] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-controller-manager-ha-291700-m03" [087a33f0-bb7c-461c-8e4c-cb4e6198ea7a] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-proxy-qg9wf" [a66bf2e1-d8ed-4adf-b10c-71286a6f6856] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-scheduler-ha-291700-m03" [93c0b454-c40e-46ad-87c9-7afee261f119] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "kube-vip-ha-291700-m03" [10ac78f8-a12a-448b-8a5d-b456ae2c0a75] Running
	I0520 04:08:41.415801    8140 system_pods.go:61] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:08:41.416413    8140 system_pods.go:74] duration metric: took 150.5167ms to wait for pod list to return data ...
	I0520 04:08:41.416413    8140 default_sa.go:34] waiting for default service account to be created ...
	I0520 04:08:41.595629    8140 request.go:629] Waited for 179.0524ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:08:41.595886    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/default/serviceaccounts
	I0520 04:08:41.595964    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.595964    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.595964    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.600705    8140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 04:08:41.601432    8140 default_sa.go:45] found service account: "default"
	I0520 04:08:41.601432    8140 default_sa.go:55] duration metric: took 185.0181ms for default service account to be created ...
	I0520 04:08:41.601432    8140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 04:08:41.798156    8140 request.go:629] Waited for 196.5103ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.798262    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/namespaces/kube-system/pods
	I0520 04:08:41.798262    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.798262    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.798262    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:41.810861    8140 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 04:08:41.820595    8140 system_pods.go:86] 24 kube-system pods found
	I0520 04:08:41.820635    8140 system_pods.go:89] "coredns-7db6d8ff4d-4hczp" [e9af71af-6624-4b3b-bcb5-84f48dd3b338] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "coredns-7db6d8ff4d-gglsg" [9ee2aa9f-785d-4eaa-8044-1205a1a7fe63] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "etcd-ha-291700" [80840c8d-6aaa-4363-94e1-93ee0b6522d9] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "etcd-ha-291700-m02" [fd59f4df-51b4-4ce8-99e0-8c9833f6a408] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "etcd-ha-291700-m03" [321ff776-654f-4a7b-9973-5b6a672438b1] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kindnet-2sqwt" [ef18e49f-cb6a-4066-ba47-20d4d3f95dc7] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kindnet-kmktc" [b8c68e57-d57b-4c05-b3c3-edc4cb6bf7a9] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kindnet-vdmtq" [12f186e5-765c-4bfe-aecc-91080f16c74d] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-apiserver-ha-291700" [e413e43a-00f6-4f8b-a04f-84ecb6d8150b] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-apiserver-ha-291700-m02" [5f50c6f3-0937-4daf-8909-d101740084aa] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-apiserver-ha-291700-m03" [95739f9c-0bd0-4323-8b37-78d67b268722] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-controller-manager-ha-291700" [57fe29d2-4776-41dd-8c7c-8dce07e29677] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-controller-manager-ha-291700-m02" [f099c9f1-45b5-43d0-8559-c016a85350d0] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-controller-manager-ha-291700-m03" [087a33f0-bb7c-461c-8e4c-cb4e6198ea7a] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-proxy-94csf" [2dfdb4ba-d05c-486e-a025-41c788c2d39d] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-proxy-qg9wf" [a66bf2e1-d8ed-4adf-b10c-71286a6f6856] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-proxy-xq4tv" [de628e75-60e5-46c0-9fa4-3f7234526be3] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-scheduler-ha-291700" [122ad5a8-cb7c-473f-b622-bc318843562f] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-scheduler-ha-291700-m02" [452afc24-5b00-44d6-a169-179f44818f0f] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-scheduler-ha-291700-m03" [93c0b454-c40e-46ad-87c9-7afee261f119] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-vip-ha-291700" [2ab71c60-36d4-4a64-ab03-51daab9b4b4b] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-vip-ha-291700-m02" [bbce05d3-7924-4cd5-a41d-195b2e026e99] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "kube-vip-ha-291700-m03" [10ac78f8-a12a-448b-8a5d-b456ae2c0a75] Running
	I0520 04:08:41.820635    8140 system_pods.go:89] "storage-provisioner" [c0498ff6-95b6-4d4a-805f-9a972e3d3cee] Running
	I0520 04:08:41.820635    8140 system_pods.go:126] duration metric: took 219.2029ms to wait for k8s-apps to be running ...
	I0520 04:08:41.820635    8140 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 04:08:41.835562    8140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:08:41.861752    8140 system_svc.go:56] duration metric: took 41.1171ms WaitForService to wait for kubelet
	I0520 04:08:41.861825    8140 kubeadm.go:576] duration metric: took 19.3684662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:08:41.861869    8140 node_conditions.go:102] verifying NodePressure condition ...
	I0520 04:08:41.985546    8140 request.go:629] Waited for 123.3188ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.246.119:8443/api/v1/nodes
	I0520 04:08:41.985546    8140 round_trippers.go:463] GET https://172.25.246.119:8443/api/v1/nodes
	I0520 04:08:41.985546    8140 round_trippers.go:469] Request Headers:
	I0520 04:08:41.985546    8140 round_trippers.go:473]     Accept: application/json, */*
	I0520 04:08:41.985546    8140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 04:08:42.001878    8140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0520 04:08:42.003481    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:08:42.003539    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:08:42.003539    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:08:42.003539    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:08:42.003539    8140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 04:08:42.003621    8140 node_conditions.go:123] node cpu capacity is 2
	I0520 04:08:42.003621    8140 node_conditions.go:105] duration metric: took 141.7526ms to run NodePressure ...
	I0520 04:08:42.003621    8140 start.go:240] waiting for startup goroutines ...
	I0520 04:08:42.003682    8140 start.go:254] writing updated cluster config ...
	I0520 04:08:42.017550    8140 ssh_runner.go:195] Run: rm -f paused
	I0520 04:08:42.173136    8140 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 04:08:42.176901    8140 out.go:177] * Done! kubectl is now configured to use "ha-291700" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565328117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565348417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.565854715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571383494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571506693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571541093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:00:48 ha-291700 dockerd[1334]: time="2024-05-20T11:00:48.571663893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.185697497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.185830596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.185848596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:22 ha-291700 dockerd[1334]: time="2024-05-20T11:09:22.186412694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:22 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:09:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25e887ed0ea02f96e2033349269707177648515aee3e13d0ee9f7bd9a5aa2d79/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 11:09:23 ha-291700 cri-dockerd[1231]: time="2024-05-20T11:09:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004132614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004356714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004410814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:09:24 ha-291700 dockerd[1334]: time="2024-05-20T11:09:24.004697114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 11:10:30 ha-291700 dockerd[1328]: 2024/05/20 11:10:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:30 ha-291700 dockerd[1328]: 2024/05/20 11:10:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:31 ha-291700 dockerd[1328]: 2024/05/20 11:10:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:31 ha-291700 dockerd[1328]: 2024/05/20 11:10:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:31 ha-291700 dockerd[1328]: 2024/05/20 11:10:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:31 ha-291700 dockerd[1328]: 2024/05/20 11:10:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:31 ha-291700 dockerd[1328]: 2024/05/20 11:10:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:10:31 ha-291700 dockerd[1328]: 2024/05/20 11:10:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a097917d5adbc       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   25e887ed0ea02       busybox-fc5497c4f-mw76w
	3d297fccb427c       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   841ca5a27ffe7       coredns-7db6d8ff4d-gglsg
	09c232a7fe7e5       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   00680e87d9b50       coredns-7db6d8ff4d-4hczp
	5e4ba8270bed1       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   d4369c807fc4d       storage-provisioner
	7534bdef6bb33       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   88196fa9961d3       kindnet-kmktc
	32c1915a2e00e       747097150317f                                                                                         26 minutes ago      Running             kube-proxy                0                   3a957403893c5       kube-proxy-xq4tv
	78ba28a57aa21       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   5abdcfb1f2b5a       kube-vip-ha-291700
	bac466f3cb7a4       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   2b4cf80fdf2bb       kube-scheduler-ha-291700
	290a4be470427       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   49d1fcba87695       kube-controller-manager-ha-291700
	7f57044b1f70d       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   cb147cc0e7076       kube-apiserver-ha-291700
	2a187608d3c68       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   fe571feda5f80       etcd-ha-291700
	
	
	==> coredns [09c232a7fe7e] <==
	[INFO] 10.244.2.2:45969 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001005s
	[INFO] 10.244.2.2:42189 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.022200702s
	[INFO] 10.244.2.2:51932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001475s
	[INFO] 10.244.2.2:40364 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001901s
	[INFO] 10.244.2.2:49544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012833401s
	[INFO] 10.244.0.4:54139 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001521s
	[INFO] 10.244.0.4:38161 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000622s
	[INFO] 10.244.0.4:49859 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000877s
	[INFO] 10.244.0.4:53896 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000081s
	[INFO] 10.244.0.4:42252 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001581s
	[INFO] 10.244.0.4:45872 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0004665s
	[INFO] 10.244.1.2:49750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001148s
	[INFO] 10.244.1.2:44851 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000849s
	[INFO] 10.244.1.2:57033 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001421s
	[INFO] 10.244.2.2:52593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002442s
	[INFO] 10.244.2.2:43583 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081s
	[INFO] 10.244.0.4:47883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001217s
	[INFO] 10.244.0.4:37129 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176201s
	[INFO] 10.244.1.2:36237 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000772s
	[INFO] 10.244.2.2:52455 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002387s
	[INFO] 10.244.2.2:57533 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000934s
	[INFO] 10.244.0.4:33879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187s
	[INFO] 10.244.0.4:49457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174801s
	[INFO] 10.244.1.2:60139 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002037s
	[INFO] 10.244.1.2:43968 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001128s
	
	
	==> coredns [3d297fccb427] <==
	[INFO] 10.244.1.2:35967 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000099s
	[INFO] 10.244.1.2:55663 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.024044752s
	[INFO] 10.244.2.2:50767 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001264s
	[INFO] 10.244.2.2:46797 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.2.2:52977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002519s
	[INFO] 10.244.0.4:38084 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002047s
	[INFO] 10.244.0.4:34960 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000987s
	[INFO] 10.244.1.2:40319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.080228512s
	[INFO] 10.244.1.2:57732 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003277s
	[INFO] 10.244.1.2:33154 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002549s
	[INFO] 10.244.1.2:42569 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012372802s
	[INFO] 10.244.1.2:55813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000618s
	[INFO] 10.244.2.2:51764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002585s
	[INFO] 10.244.2.2:34629 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000609s
	[INFO] 10.244.0.4:57039 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001498s
	[INFO] 10.244.0.4:52530 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001986s
	[INFO] 10.244.1.2:50976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001901s
	[INFO] 10.244.1.2:60696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000619s
	[INFO] 10.244.1.2:59375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001456s
	[INFO] 10.244.2.2:57839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001713s
	[INFO] 10.244.2.2:48189 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001798s
	[INFO] 10.244.0.4:32914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002553s
	[INFO] 10.244.0.4:50478 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000595s
	[INFO] 10.244.1.2:48046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003657s
	[INFO] 10.244.1.2:53608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001944s
	
	
	==> describe nodes <==
	Name:               ha-291700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T04_00_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:00:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:27:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:25:11 +0000   Mon, 20 May 2024 11:00:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:25:11 +0000   Mon, 20 May 2024 11:00:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:25:11 +0000   Mon, 20 May 2024 11:00:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:25:11 +0000   Mon, 20 May 2024 11:00:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.246.119
	  Hostname:    ha-291700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba3a75213dec414eb3ca40f5e8b787a6
	  System UUID:                1bf698ac-7375-c44d-af40-b09309c0ada8
	  Boot ID:                    9daea59b-2ac2-44db-b81f-2140148dd0a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mw76w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-4hczp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-gglsg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-291700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-kmktc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-291700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-291700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-xq4tv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-291700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-291700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-291700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-291700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-291700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-291700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-291700 event: Registered Node ha-291700 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-291700 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-291700 event: Registered Node ha-291700 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-291700 event: Registered Node ha-291700 in Controller
	
	
	Name:               ha-291700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T04_04_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:04:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:26:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:25:13 +0000   Mon, 20 May 2024 11:04:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:25:13 +0000   Mon, 20 May 2024 11:04:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:25:13 +0000   Mon, 20 May 2024 11:04:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:25:13 +0000   Mon, 20 May 2024 11:04:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.251.208
	  Hostname:    ha-291700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3554ae6a627e456685a6463794338840
	  System UUID:                a11b3769-33c5-2a4a-83c1-fcb6337901f4
	  Boot ID:                    586f963f-f3bf-4b1e-987d-f03d167c3bd0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qxg28                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-291700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-2sqwt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-291700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-291700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-94csf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-291700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-291700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-291700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-291700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-291700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-291700-m02 event: Registered Node ha-291700-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-291700-m02 event: Registered Node ha-291700-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-291700-m02 event: Registered Node ha-291700-m02 in Controller
	
	
	Name:               ha-291700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T04_08_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:08:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:27:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:25:05 +0000   Mon, 20 May 2024 11:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:25:05 +0000   Mon, 20 May 2024 11:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:25:05 +0000   Mon, 20 May 2024 11:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:25:05 +0000   Mon, 20 May 2024 11:08:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.246.110
	  Hostname:    ha-291700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6effe892fc604c7ca670493903395dda
	  System UUID:                1afc6042-4034-c244-b427-bbf53c43dbc9
	  Boot ID:                    56458d60-9ccb-4191-b42c-5cbabce2dfac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bghlc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-291700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-vdmtq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-291700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-291700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-qg9wf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-291700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-291700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-291700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-291700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-291700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-291700-m03 event: Registered Node ha-291700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-291700-m03 event: Registered Node ha-291700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-291700-m03 event: Registered Node ha-291700-m03 in Controller
	
	
	Name:               ha-291700-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-291700-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-291700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T04_13_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:13:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-291700-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:27:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:24:32 +0000   Mon, 20 May 2024 11:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:24:32 +0000   Mon, 20 May 2024 11:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:24:32 +0000   Mon, 20 May 2024 11:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:24:32 +0000   Mon, 20 May 2024 11:14:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.244.211
	  Hostname:    ha-291700-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe4bb6759da746109fbd8d1433113f04
	  System UUID:                93d87da1-ff32-4949-9d82-762cce08f25c
	  Boot ID:                    c7c8b244-0651-45ee-94d0-080b664227ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4v97g       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bcbrt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-291700-m04 event: Registered Node ha-291700-m04 in Controller
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-291700-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-291700-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-291700-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-291700-m04 event: Registered Node ha-291700-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-291700-m04 event: Registered Node ha-291700-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-291700-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.878927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000071] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 10:59] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.182683] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[ +32.195135] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.115863] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.573509] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.204047] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.242377] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.799310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.204162] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.226901] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.322093] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[May20 11:00] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.105267] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.530810] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +5.508373] systemd-fstab-generator[1712]: Ignoring "noauto" option for root device
	[  +0.107158] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.027945] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.591477] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[ +13.802267] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.799365] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.126041] kauditd_printk_skb: 19 callbacks suppressed
	[May20 11:04] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2a187608d3c6] <==
	{"level":"info","ts":"2024-05-20T11:27:36.819112Z","caller":"traceutil/trace.go:171","msg":"trace[1760466158] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:29; response_revision:4660; }","duration":"271.948514ms","start":"2024-05-20T11:27:36.547153Z","end":"2024-05-20T11:27:36.819101Z","steps":["trace[1760466158] 'agreement among raft nodes before linearized reading'  (duration: 271.571817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:27:36.872326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.87262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.873194Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.874593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.881301Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.983214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-05-20T11:27:36.881492Z","caller":"traceutil/trace.go:171","msg":"trace[1814191113] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:4660; }","duration":"153.208212ms","start":"2024-05-20T11:27:36.728271Z","end":"2024-05-20T11:27:36.881479Z","steps":["trace[1814191113] 'agreement among raft nodes before linearized reading'  (duration: 146.994764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:27:36.889966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.931443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.939124Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.944311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.959642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.96524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:36.973274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.007219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.066147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.116417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.266286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/ha-291700-m04\" ","response":"range_response_count:1 size:556"}
	{"level":"info","ts":"2024-05-20T11:27:37.117047Z","caller":"traceutil/trace.go:171","msg":"trace[1709180044] range","detail":"{range_begin:/registry/leases/kube-node-lease/ha-291700-m04; range_end:; response_count:1; response_revision:4661; }","duration":"109.95798ms","start":"2024-05-20T11:27:37.007072Z","end":"2024-05-20T11:27:37.11703Z","steps":["trace[1709180044] 'range keys from in-memory index tree'  (duration: 107.178003ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:27:37.166263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.232974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.266058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.291644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.361524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.372556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T11:27:37.373307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1f16a871a6f5df87","from":"1f16a871a6f5df87","remote-peer-id":"6efbaf66d8f2dbf2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:27:37 up 29 min,  0 users,  load average: 1.28, 0.54, 0.41
	Linux ha-291700 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7534bdef6bb3] <==
	I0520 11:27:07.212125       1 main.go:250] Node ha-291700-m04 has CIDR [10.244.3.0/24] 
	I0520 11:27:17.224656       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:27:17.224761       1 main.go:227] handling current node
	I0520 11:27:17.224778       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:27:17.224786       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:27:17.224912       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:27:17.224945       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:27:17.225100       1 main.go:223] Handling node with IPs: map[172.25.244.211:{}]
	I0520 11:27:17.225180       1 main.go:250] Node ha-291700-m04 has CIDR [10.244.3.0/24] 
	I0520 11:27:27.275900       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:27:27.275950       1 main.go:227] handling current node
	I0520 11:27:27.275963       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:27:27.275970       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:27:27.276411       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:27:27.276585       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:27:27.276911       1 main.go:223] Handling node with IPs: map[172.25.244.211:{}]
	I0520 11:27:27.277163       1 main.go:250] Node ha-291700-m04 has CIDR [10.244.3.0/24] 
	I0520 11:27:37.296204       1 main.go:223] Handling node with IPs: map[172.25.246.119:{}]
	I0520 11:27:37.296319       1 main.go:227] handling current node
	I0520 11:27:37.296333       1 main.go:223] Handling node with IPs: map[172.25.251.208:{}]
	I0520 11:27:37.296340       1 main.go:250] Node ha-291700-m02 has CIDR [10.244.1.0/24] 
	I0520 11:27:37.296547       1 main.go:223] Handling node with IPs: map[172.25.246.110:{}]
	I0520 11:27:37.296578       1 main.go:250] Node ha-291700-m03 has CIDR [10.244.2.0/24] 
	I0520 11:27:37.296754       1 main.go:223] Handling node with IPs: map[172.25.244.211:{}]
	I0520 11:27:37.296847       1 main.go:250] Node ha-291700-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7f57044b1f70] <==
	E0520 11:09:35.160346       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61772: use of closed network connection
	E0520 11:09:35.652084       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61774: use of closed network connection
	E0520 11:09:36.113270       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61776: use of closed network connection
	E0520 11:09:36.907315       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61779: use of closed network connection
	E0520 11:09:47.372276       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61781: use of closed network connection
	E0520 11:09:47.848922       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61784: use of closed network connection
	E0520 11:09:58.309706       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61786: use of closed network connection
	E0520 11:09:58.764663       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61788: use of closed network connection
	E0520 11:10:09.222195       1 conn.go:339] Error on socket receive: read tcp 172.25.255.254:8443->172.25.240.1:61790: use of closed network connection
	I0520 11:14:03.515272       1 trace.go:236] Trace[983782683]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:658a003e-a585-4c84-8776-8846ba67a3d3,client:172.25.246.110,api-group:coordination.k8s.io,api-version:v1,name:ha-291700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-291700-m03,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (20-May-2024 11:14:02.994) (total time: 520ms):
	Trace[983782683]: ["GuaranteedUpdate etcd3" audit-id:658a003e-a585-4c84-8776-8846ba67a3d3,key:/leases/kube-node-lease/ha-291700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 520ms (11:14:02.995)
	Trace[983782683]:  ---"Txn call completed" 519ms (11:14:03.515)]
	Trace[983782683]: [520.372269ms] [520.372269ms] END
	W0520 11:27:11.251276       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.246.110 172.25.246.119]
	I0520 11:27:29.164749       1 trace.go:236] Trace[1691502618]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d61822ae-eb47-42d2-90cd-fec697ff301c,client:172.25.244.211,api-group:coordination.k8s.io,api-version:v1,name:ha-291700-m04,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-291700-m04,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (20-May-2024 11:27:28.597) (total time: 566ms):
	Trace[1691502618]: ["GuaranteedUpdate etcd3" audit-id:d61822ae-eb47-42d2-90cd-fec697ff301c,key:/leases/kube-node-lease/ha-291700-m04,type:*coordination.Lease,resource:leases.coordination.k8s.io 566ms (11:27:28.597)
	Trace[1691502618]:  ---"Txn call completed" 566ms (11:27:29.164)]
	Trace[1691502618]: [566.983261ms] [566.983261ms] END
	I0520 11:27:31.830558       1 trace.go:236] Trace[1398385879]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.25.246.119,type:*v1.Endpoints,resource:apiServerIPInfo (20-May-2024 11:27:31.221) (total time: 609ms):
	Trace[1398385879]: ---"Txn call completed" 555ms (11:27:31.830)
	Trace[1398385879]: [609.328858ms] [609.328858ms] END
	I0520 11:27:31.830923       1 trace.go:236] Trace[374859303]: "Update" accept:application/json, */*,audit-id:6460b00b-ef7a-49e3-9eea-a2edd0617c82,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (20-May-2024 11:27:31.277) (total time: 553ms):
	Trace[374859303]: ["GuaranteedUpdate etcd3" audit-id:6460b00b-ef7a-49e3-9eea-a2edd0617c82,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 552ms (11:27:31.278)
	Trace[374859303]:  ---"Txn call completed" 551ms (11:27:31.830)]
	Trace[374859303]: [553.115342ms] [553.115342ms] END
	
	
	==> kube-controller-manager [290a4be47042] <==
	E0520 11:09:22.007126       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0520 11:09:22.046669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.298142ms"
	I0520 11:09:22.124606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.62149ms"
	I0520 11:09:22.125406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="671.297µs"
	I0520 11:09:22.305625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.399µs"
	I0520 11:09:23.426054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.4µs"
	I0520 11:09:23.618045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.7µs"
	I0520 11:09:23.696193       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.8µs"
	I0520 11:09:23.709482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.2µs"
	I0520 11:09:23.739765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.5µs"
	I0520 11:09:23.764566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125µs"
	I0520 11:09:23.785591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.1µs"
	I0520 11:09:24.599066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.427265ms"
	I0520 11:09:24.599528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="271.2µs"
	I0520 11:09:24.718090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.468793ms"
	I0520 11:09:24.718718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.3µs"
	I0520 11:09:25.641082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.5µs"
	I0520 11:09:25.674477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.3µs"
	I0520 11:09:28.540601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.278793ms"
	I0520 11:09:28.540695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.2µs"
	E0520 11:13:50.931499       1 certificate_controller.go:146] Sync csr-jngql failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jngql": the object has been modified; please apply your changes to the latest version and try again
	I0520 11:13:51.028185       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-291700-m04\" does not exist"
	I0520 11:13:51.049319       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-291700-m04" podCIDRs=["10.244.3.0/24"]
	I0520 11:13:55.544775       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-291700-m04"
	I0520 11:14:14.353322       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-291700-m04"
	
	
	==> kube-proxy [32c1915a2e00] <==
	I0520 11:00:37.083671       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:00:37.098636       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.246.119"]
	I0520 11:00:37.201278       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:00:37.201324       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:00:37.201343       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:00:37.205129       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:00:37.205524       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:00:37.205861       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:00:37.207223       1 config.go:192] "Starting service config controller"
	I0520 11:00:37.207399       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:00:37.207583       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:00:37.207793       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:00:37.208531       1 config.go:319] "Starting node config controller"
	I0520 11:00:37.209754       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:00:37.307837       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:00:37.309194       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:00:37.310221       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bac466f3cb7a] <==
	W0520 11:00:20.088940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:00:20.089020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:00:20.114740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:00:20.114823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 11:00:20.266313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:00:20.266384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:00:20.272933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:00:20.273385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:00:20.294610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:00:20.294697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0520 11:00:21.775200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:09:21.375594       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="0fcf49f9-053c-4345-9b79-044a9cf79f4c" pod="default/busybox-fc5497c4f-qxg28" assumedNode="ha-291700-m02" currentNode="ha-291700"
	I0520 11:09:21.387414       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="25cad6b2-46c1-4591-8bc4-c096c9866cfe" pod="default/busybox-fc5497c4f-sj7kv" assumedNode="ha-291700-m03" currentNode="ha-291700-m02"
	E0520 11:09:21.419242       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qxg28\": pod busybox-fc5497c4f-qxg28 is already assigned to node \"ha-291700-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qxg28" node="ha-291700"
	E0520 11:09:21.419504       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0fcf49f9-053c-4345-9b79-044a9cf79f4c(default/busybox-fc5497c4f-qxg28) was assumed on ha-291700 but assigned to ha-291700-m02" pod="default/busybox-fc5497c4f-qxg28"
	E0520 11:09:21.419551       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qxg28\": pod busybox-fc5497c4f-qxg28 is already assigned to node \"ha-291700-m02\"" pod="default/busybox-fc5497c4f-qxg28"
	I0520 11:09:21.419619       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qxg28" node="ha-291700-m02"
	E0520 11:09:21.423531       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sj7kv\": pod busybox-fc5497c4f-sj7kv is already assigned to node \"ha-291700-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-sj7kv" node="ha-291700-m02"
	E0520 11:09:21.423592       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 25cad6b2-46c1-4591-8bc4-c096c9866cfe(default/busybox-fc5497c4f-sj7kv) was assumed on ha-291700-m02 but assigned to ha-291700-m03" pod="default/busybox-fc5497c4f-sj7kv"
	E0520 11:09:21.423610       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sj7kv\": pod busybox-fc5497c4f-sj7kv is already assigned to node \"ha-291700-m03\"" pod="default/busybox-fc5497c4f-sj7kv"
	I0520 11:09:21.423632       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-sj7kv" node="ha-291700-m03"
	E0520 11:09:21.569128       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mw76w\": pod busybox-fc5497c4f-mw76w is already assigned to node \"ha-291700\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-mw76w" node="ha-291700"
	E0520 11:09:21.571103       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0710240d-711a-43a0-bbee-82236e00bbef(default/busybox-fc5497c4f-mw76w) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-mw76w"
	E0520 11:09:21.571309       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mw76w\": pod busybox-fc5497c4f-mw76w is already assigned to node \"ha-291700\"" pod="default/busybox-fc5497c4f-mw76w"
	I0520 11:09:21.571752       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-mw76w" node="ha-291700"
	
	
	==> kubelet <==
	May 20 11:23:23 ha-291700 kubelet[2213]: E0520 11:23:23.366591    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:23:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:23:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:23:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:23:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:24:23 ha-291700 kubelet[2213]: E0520 11:24:23.367089    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:24:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:24:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:24:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:24:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:25:23 ha-291700 kubelet[2213]: E0520 11:25:23.364352    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:25:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:25:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:25:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:25:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:26:23 ha-291700 kubelet[2213]: E0520 11:26:23.364528    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:26:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:26:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:26:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:26:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:27:23 ha-291700 kubelet[2213]: E0520 11:27:23.365522    2213 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:27:23 ha-291700 kubelet[2213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:27:23 ha-291700 kubelet[2213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:27:23 ha-291700 kubelet[2213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:27:23 ha-291700 kubelet[2213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 04:27:24.727791    9564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-291700 -n ha-291700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-291700 -n ha-291700: (13.0511286s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-291700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (61.82s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (481.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-093300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0520 05:00:25.059489    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 05:03:04.565007    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:05:25.053789    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-093300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (7m24.5400744s)

                                                
                                                
-- stdout --
	* [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.25.248.197
	  - NO_PROXY=172.25.248.197
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 04:58:42.814068    4324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
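The shell guard executed above is an idempotent /etc/hosts update: it appends a `127.0.1.1` entry only when no line already maps the hostname, and rewrites an existing `127.0.1.1` line rather than duplicating it. The same pattern can be exercised on a scratch file (the `demo-host` name and `/tmp/hosts-demo` path are placeholders, not from the log):

```shell
# Ensure /tmp/hosts-demo maps 127.0.1.1 to 'demo-host' exactly once,
# mirroring the grep/sed guard minikube runs against /etc/hosts.
printf '127.0.0.1 localhost\n' > /tmp/hosts-demo
update_hosts() {
  if ! grep -q '\sdemo-host$' /tmp/hosts-demo; then        # already mapped?
    if grep -q '^127\.0\.1\.1\s' /tmp/hosts-demo; then     # rewrite existing entry
      sed -i 's/^127\.0\.1\.1\s.*/127.0.1.1 demo-host/' /tmp/hosts-demo
    else                                                    # or append a new one
      echo '127.0.1.1 demo-host' >> /tmp/hosts-demo
    fi
  fi
}
update_hosts; update_hosts   # second call is a no-op
grep -c 'demo-host' /tmp/hosts-demo   # → 1
```

Because the outer `grep` matches after the first run, repeated invocations leave the file unchanged, which is why the provisioner can safely re-run this on every boot.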
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
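The unit file written above depends on systemd's rule that a bare `ExecStart=` clears any command inherited from an earlier definition; without it, a second `ExecStart=` on a `Type=notify` service is rejected with the "more than one ExecStart= setting" error quoted in the unit's own comments. A minimal standalone sketch of that reset pattern, written to a scratch directory rather than a real unit path:

```shell
# Write a drop-in that overrides ExecStart for a hypothetical docker.service.
# The first, empty ExecStart= clears the inherited command; the second sets
# the replacement. Omitting the empty directive would make systemd refuse to
# start the unit for any Type other than oneshot.
mkdir -p /tmp/dropin-demo/docker.service.d
cat > /tmp/dropin-demo/docker.service.d/10-override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' /tmp/dropin-demo/docker.service.d/10-override.conf   # → 2
```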
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
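The command just executed uses a common install-if-changed idiom: `diff` exits non-zero when the files differ (or, as in this first boot, when the target does not exist yet), so the `||` branch performs the replacement and service restart only when something actually changed. A standalone sketch with placeholder file names:

```shell
# Install 'new' over 'current' only when the contents differ.
# diff exits 0 when identical and non-zero when different or missing,
# so the || branch runs the replacement only when needed.
printf 'v2\n' > /tmp/demo.new
diff -u /tmp/demo.current /tmp/demo.new 2>/dev/null \
  || mv /tmp/demo.new /tmp/demo.current   # here minikube also runs daemon-reload/restart
cat /tmp/demo.current   # → v2
```

On subsequent runs with identical content, `diff` succeeds and the move (and, in the real command, the `systemctl restart`) is skipped, which keeps reprovisioning cheap.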
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
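The fix.go lines above read the guest clock with `date +%s.%N`, compare it with the host clock, and reset the guest via `date -s @<epoch>` because the ~5 s delta is too large. The skew check can be sketched as follows, using the two timestamps from this log; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold:

```shell
# Compare a guest timestamp (as 'date +%s.%N' prints it) with a host
# timestamp and decide whether the skew warrants resetting the clock.
guest=1716206465.433808737   # sample guest reading from the log
host=1716206460.326774700    # sample host reading
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.3f", d }')
echo "delta=${delta}s"       # → delta=5.107s
# Reset only when skew exceeds the assumed 2-second tolerance:
awk -v d="$delta" 'BEGIN { exit !(d > 2) }' && echo "would run: sudo date -s @${guest%.*}"
```

Note that `date -s @<epoch>` takes whole seconds, hence the `${guest%.*}` truncation, matching the `sudo date -s @1716206465` seen in the log.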
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
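The four-step pattern above (copy the CA into `/usr/share/ca-certificates`, take its OpenSSL subject hash, then symlink `<hash>.0` into `/etc/ssl/certs`) is how OpenSSL-based clients find the CA at verification time. A minimal standalone sketch of the same sequence, using a throwaway self-signed CA and a temp directory in place of `/etc/ssl/certs` (paths and the `minikubeCA` name here are illustrative, not the actual guest-VM run):

```shell
# Stand-in for /etc/ssl/certs inside the guest VM
certdir=$(mktemp -d)

# Generate a throwaway self-signed CA (the real run ships a pre-built minikubeCA)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$certdir/ca.key" -out "$certdir/minikubeCA.pem" \
  -subj "/CN=minikubeCA" 2>/dev/null

# Subject hash, e.g. "b5213941" in the log above for the real minikubeCA
hash=$(openssl x509 -hash -noout -in "$certdir/minikubeCA.pem")

# OpenSSL looks up trusted CAs by <subject-hash>.0, hence the ln -fs in the log
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

The `test -s … && ln -fs …` guard seen in the log is the same idea with an added check that the source cert is non-empty before linking.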
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
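The stale-config cleanup above repeats one check per kubeconfig: if the file does not contain the expected control-plane endpoint (here, because it doesn't exist at all on first start), remove it so kubeadm regenerates it. A condensed standalone version of that loop, run against a temp directory instead of `/etc/kubernetes` (minikube performs each step individually over SSH):

```shell
# Stand-in for /etc/kubernetes in the guest
kubedir=$(mktemp -d)
endpoint="https://control-plane.minikube.internal:8443"

# Simulate one up-to-date and one stale kubeconfig
printf 'server: %s\n' "$endpoint"    > "$kubedir/admin.conf"
printf 'server: https://old:8443\n'  > "$kubedir/kubelet.conf"

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Missing, or pointing at the wrong endpoint: remove so kubeadm rewrites it
  grep -q "$endpoint" "$kubedir/$f" 2>/dev/null || rm -f "$kubedir/$f"
done
ls "$kubedir"
```

In the log every `grep` exits 2 (file not found), so all four `rm -f` commands run and `kubeadm init` writes the configs fresh.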
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
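The `--discovery-token-ca-cert-hash` printed in the join commands above is a SHA-256 over the cluster CA's DER-encoded public key (the certificate's SubjectPublicKeyInfo). A sketch of recomputing that hash with openssl, run here against a throwaway self-signed certificate since the real `/etc/kubernetes/pki/ca.crt` lives inside the VM:

```shell
# Generate a disposable RSA CA cert purely as input for the pipeline below;
# on a real control plane you would point at /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# kubeadm's discovery hash: sha256 of the DER-encoded public key.
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The 64-hex-digit result is what follows `sha256:` in the `kubeadm join` invocation.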
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
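The four `kindnet` objects created above come from a single manifest that minikube templates into `/var/tmp/minikube/cni.yaml` and applies with kubectl. A trimmed sketch of that manifest's shape (fields and the image reference are illustrative, not the exact manifest minikube ships):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kindnet
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kindnet
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: kindnet}
  template:
    metadata:
      labels: {app: kindnet}
    spec:
      serviceAccountName: kindnet   # bound to cluster-wide node/pod read access
      hostNetwork: true             # CNI agent runs in the host network namespace
      containers:
      - name: kindnet-cni
        image: kindest/kindnetd     # image/tag illustrative
```

A ClusterRole and ClusterRoleBinding (the other two objects in the log) grant the ServiceAccount the node and pod watches kindnet needs to program pod routes.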
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
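The `-16` above is minikube confirming the apiserver is shielded from the OOM killer by reading its score adjustment out of procfs. The same probe against the current shell (`oom_adj` in the log is the legacy procfs file; `oom_score_adj` is its modern replacement):

```shell
# Print this shell's OOM score adjustment; kube-apiserver's would show a
# strongly negative value (the log reads -16 via the deprecated oom_adj).
cat "/proc/$$/oom_score_adj"
```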
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
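The burst of `NotFound` errors above is expected: minikube polls `kubectl get sa default` roughly every 500ms until the controller manager creates the ServiceAccount, then reports the total wait as a duration metric. A minimal sketch of that wait pattern, with the kubectl call replaced by a mock that succeeds on its fourth attempt (`wait_for` and `check` are illustrative names, not minikube code):

```shell
#!/bin/sh
# Retry a command until it succeeds, ~500ms apart, with a retry cap.
wait_for() {
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge 20 ] && return 1   # give up after ~10s
    sleep 0.5
  done
}

# Mock "kubectl get sa default": fails until it has been called 3 times.
check() {
  n=$(cat /tmp/sa_tries 2>/dev/null || echo 0)
  echo $((n + 1)) > /tmp/sa_tries
  [ "$n" -ge 3 ]
}

rm -f /tmp/sa_tries
wait_for check && echo "default serviceaccount found"
```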
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
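The `configmap/coredns replaced` result comes from the sed pipeline a few lines up: it splices a `hosts` block in front of the `forward` plugin so `host.minikube.internal` resolves inside the cluster, and a `log` directive after `errors`. The same rewrite applied to a local Corefile copy instead of the live ConfigMap (GNU sed syntax, as in the log; the Corefile here is a trimmed stand-in for the one dumped earlier):

```shell
# Trimmed stand-in for the CoreDNS Corefile shown earlier in the log.
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
}
EOF

# Insert a hosts{} block before `forward` and `log` before `errors`,
# mirroring minikube's in-place edit of the ConfigMap.
sed -e '/^    forward . \/etc\/resolv.conf.*/i \    hosts {\n       172.25.240.1 host.minikube.internal\n       fallthrough\n    }' \
    -e '/^    errors *$/i \    log' /tmp/Corefile
```

CoreDNS's `reload` plugin then picks up the replaced ConfigMap without a pod restart.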
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
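The PUT/GET exchange above is minikube driving the `scale` subresource of the coredns Deployment: it writes `spec.replicas: 1` to `/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale`, then re-reads the Scale object until `status.replicas` matches `spec.replicas` (the log shows `status.replicas` going from 2 to 1 before the "rescaled to 1 replicas" line). A minimal sketch of the body, URL, and convergence check — helper names are mine, not minikube's (the real logic lives in minikube's Go `kapi.go`):

```python
import json

def make_scale_body(name: str, namespace: str, replicas: int) -> str:
    """Build an autoscaling/v1 Scale request body like the one in the log."""
    return json.dumps({
        "kind": "Scale",
        "apiVersion": "autoscaling/v1",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"replicas": replicas},
    })

def scale_url(host: str, namespace: str, deployment: str) -> str:
    """URL of the Deployment's scale subresource, as in the PUT/GET lines."""
    return (f"{host}/apis/apps/v1/namespaces/{namespace}"
            f"/deployments/{deployment}/scale")

def converged(scale_obj: dict) -> bool:
    """The poll stops once observed status.replicas matches spec.replicas."""
    return scale_obj["status"]["replicas"] == scale_obj["spec"]["replicas"]

body = make_scale_body("coredns", "kube-system", 1)
url = scale_url("https://172.25.248.197:8443", "kube-system", "coredns")
```

In the log, the first GET after the PUT still reports `"status":{"replicas":2,...}` at resourceVersion 363; by resourceVersion 376 the status has converged to 1 and the loop exits.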
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
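The repeated GETs of `/api/v1/nodes/multinode-093300` roughly every 500 ms are minikube's node-readiness wait (`node_ready.go`): each Node response is inspected for the `Ready` condition, and the loop keeps logging `"Ready":"False"` until kubelet flips it to `True`. A hedged sketch of that condition check, assuming a Node already parsed into a dict (the function name is illustrative, not minikube's API):

```python
def node_ready(node: dict) -> bool:
    """True once the Node's Ready condition reports status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    # No Ready condition yet: kubelet hasn't reported, treat as not ready.
    return False

not_ready = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
ready = {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
```

Until the condition flips, the log's `node_ready.go:53` line reports `"Ready":"False"` every few poll iterations, which is what the surrounding GET/response blocks show.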
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-093300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.7278861s)
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.8969052s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                    |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| pause   | -p json-output-395900                     | json-output-395900       | testUser          | v1.33.1 | 20 May 24 04:38 PDT | 20 May 24 04:38 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| unpause | -p json-output-395900                     | json-output-395900       | testUser          | v1.33.1 | 20 May 24 04:38 PDT | 20 May 24 04:39 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| stop    | -p json-output-395900                     | json-output-395900       | testUser          | v1.33.1 | 20 May 24 04:39 PDT | 20 May 24 04:39 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| delete  | -p json-output-395900                     | json-output-395900       | minikube1\jenkins | v1.33.1 | 20 May 24 04:39 PDT | 20 May 24 04:39 PDT |
	| start   | -p json-output-error-753700               | json-output-error-753700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:39 PDT |                     |
	|         | --memory=2200 --output=json               |                          |                   |         |                     |                     |
	|         | --wait=true --driver=fail                 |                          |                   |         |                     |                     |
	| delete  | -p json-output-error-753700               | json-output-error-753700 | minikube1\jenkins | v1.33.1 | 20 May 24 04:39 PDT | 20 May 24 04:40 PDT |
	| start   | -p first-701000                           | first-701000             | minikube1\jenkins | v1.33.1 | 20 May 24 04:40 PDT | 20 May 24 04:43 PDT |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| start   | -p second-701000                          | second-701000            | minikube1\jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:46 PDT |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| delete  | -p second-701000                          | second-701000            | minikube1\jenkins | v1.33.1 | 20 May 24 04:47 PDT | 20 May 24 04:48 PDT |
	| delete  | -p first-701000                           | first-701000             | minikube1\jenkins | v1.33.1 | 20 May 24 04:48 PDT | 20 May 24 04:49 PDT |
	| start   | -p mount-start-1-859800                   | mount-start-1-859800     | minikube1\jenkins | v1.33.1 | 20 May 24 04:49 PDT | 20 May 24 04:51 PDT |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46464                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-1-859800     | minikube1\jenkins | v1.33.1 | 20 May 24 04:51 PDT |                     |
	|         | --profile mount-start-1-859800 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46464 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-1-859800 ssh -- ls            | mount-start-1-859800     | minikube1\jenkins | v1.33.1 | 20 May 24 04:51 PDT | 20 May 24 04:51 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| start   | -p mount-start-2-931300                   | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:51 PDT | 20 May 24 04:54 PDT |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:54 PDT |                     |
	|         | --profile mount-start-2-931300 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-931300 ssh -- ls            | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:54 PDT | 20 May 24 04:54 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-1-859800                   | mount-start-1-859800     | minikube1\jenkins | v1.33.1 | 20 May 24 04:54 PDT | 20 May 24 04:55 PDT |
	|         | --alsologtostderr -v=5                    |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-931300 ssh -- ls            | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:55 PDT | 20 May 24 04:55 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| stop    | -p mount-start-2-931300                   | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:55 PDT | 20 May 24 04:55 PDT |
	| start   | -p mount-start-2-931300                   | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:55 PDT | 20 May 24 04:58 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --profile mount-start-2-931300 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-931300 ssh -- ls            | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-2-931300                   | mount-start-2-931300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	| delete  | -p mount-start-1-859800                   | mount-start-1-859800     | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	| start   | -p multinode-093300                       | multinode-093300         | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --wait=true --memory=2200                 |                          |                   |         |                     |                     |
	|         | --nodes=2 -v=8                            |                          |                   |         |                     |                     |
	|         | --alsologtostderr                         |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
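Each of the `sed` invocations above edits `/etc/containerd/config.toml` in place. The sketch below replays one of them, the `SystemdCgroup` rewrite, on a throwaway config fragment; the fragment's contents are illustrative, not the VM's actual config.

```shell
# Apply minikube's SystemdCgroup sed rewrite to a scratch copy.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same expression as the logged command: preserve indentation, force false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```

The captured group `\1` keeps the original indentation, so the rewrite is safe regardless of how deeply the key is nested.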
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
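The `/etc/hosts` update above is idempotent: strip any stale `host.minikube.internal` line, then append the current one. Replayed on a scratch hosts file (the IP is the one from this run; `sudo cp` is omitted):

```shell
# Idempotent hosts-entry update, minikube-style, on a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.240.1\thost.minikube.internal\n' > "$hosts"
# Drop any existing entry, then append the fresh one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"
```

Running it repeatedly leaves exactly one entry, which is the point of the grep-then-append dance.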
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
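The non-zero `stat` above is expected: minikube probes for an existing preload by exit status and only copies the tarball when the probe fails. A sketch of that branch, using a temp path that is guaranteed absent:

```shell
# Existence probe: branch on stat's exit status, as ssh_runner.go:352 does.
f=$(mktemp -d)/preloaded.tar.lz4
if stat -c "%s %y" "$f" >/dev/null 2>&1; then
  state=present   # skip the copy
else
  state=missing   # triggers the scp of the preload tarball, as seen below
fi
echo "preload $state"
```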
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
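`crypto.go` mints these profile certs with Go's crypto libraries; the same flow can be approximated with `openssl`, shown below purely as an analogy (a stand-in CA is created here because the real one is skipped above as already valid; all filenames are illustrative):

```shell
# Rough openssl analogue of minikube's client-cert generation, in a temp dir.
d=$(mktemp -d)
# Stand-in CA (the real minikubeCA already exists and is reused).
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$d/ca.key" \
  -out "$d/ca.crt" -subj "/CN=minikubeCA" -days 1 >/dev/null 2>&1
# Key + CSR for the "minikube-user" client identity.
openssl req -newkey rsa:2048 -nodes -keyout "$d/client.key" \
  -out "$d/client.csr" -subj "/O=system:masters/CN=minikube-user" >/dev/null 2>&1
# Sign the CSR with the CA, yielding the client cert.
openssl x509 -req -in "$d/client.csr" -CA "$d/ca.crt" -CAkey "$d/ca.key" \
  -CAcreateserial -out "$d/client.crt" -days 1 >/dev/null 2>&1
openssl verify -CAfile "$d/ca.crt" "$d/client.crt"
```

The verify step is the sanity check: the client cert must chain to the CA, exactly the property kubeadm will rely on later.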
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
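The scp lines above copy each certificate together with its private key into /var/lib/minikube/certs. A generic way to sanity-check that a copied cert and key actually belong together (plain openssl, not part of minikube; the demo files stand in for pairs like apiserver.crt/apiserver.key) is to compare digests of their RSA moduli:

```shell
# Sketch: confirm an x509 cert and a private key form a pair by
# comparing SHA-256 digests of their public-key moduli.
certs_match() {
  crt_mod=$(openssl x509 -noout -modulus -in "$1" | openssl dgst -sha256)
  key_mod=$(openssl rsa -noout -modulus -in "$2" | openssl dgst -sha256)
  [ "$crt_mod" = "$key_mod" ]
}

# Demo on a throwaway self-signed pair (illustrative stand-in for the
# profile certs the log copies into /var/lib/minikube/certs):
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.crt" 2>/dev/null
certs_match "$tmp/demo.crt" "$tmp/demo.key" && echo "demo pair: match"
```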
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
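The three openssl/ln sequences above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is found via a symlink named after its subject hash, with a `.0` suffix (what `c_rehash` automates). A condensed sketch of the same step, with the PEM path and link directory as illustrative parameters:

```shell
# Sketch of the per-CA step the log performs by hand: compute the
# subject hash of a PEM certificate and symlink <hash>.0 to it.
install_ca_link() {
  pem=$1                  # e.g. /usr/share/ca-certificates/minikubeCA.pem
  certdir=${2:-/etc/ssl/certs}
  hash=$(openssl x509 -hash -noout -in "$pem")
  ln -fs "$pem" "$certdir/$hash.0"
  echo "$hash"
}
```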
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
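The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before `kubeadm init`. A condensed sketch of that loop (not minikube source; the endpoint is the one from the log, and the directory is a parameter so the sketch can run against a scratch dir):

```shell
# Sketch of the stale-kubeconfig cleanup seen in the log: delete any
# kubeconfig that is missing or does not mention the expected endpoint.
clean_stale_kubeconfigs() {
  endpoint="https://control-plane.minikube.internal:8443"
  dir=$1
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    grep -q "$endpoint" "$dir/$f" 2>/dev/null || rm -f "$dir/$f"
  done
}
```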
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
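The two `kubeadm join` invocations printed above differ only by the trailing `--control-plane` flag. A minimal sketch of assembling such a command line (hypothetical helper, not minikube's actual code):

```python
def join_command(endpoint: str, token: str, ca_hash: str,
                 control_plane: bool = False) -> str:
    """Assemble a `kubeadm join` command like the ones printed
    at the end of `kubeadm init` in the log above."""
    parts = ["kubeadm", "join", endpoint, "--token", token,
             "--discovery-token-ca-cert-hash", ca_hash]
    if control_plane:
        # Control-plane nodes additionally pass --control-plane.
        parts.append("--control-plane")
    return " ".join(parts)

print(join_command("control-plane.minikube.internal:8443",
                   "somuqs.h4yzg3rk2hezfv3h",
                   "sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a"))
```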
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
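The `-16` logged above is the apiserver's OOM score adjustment, read back from `cat /proc/$(pgrep kube-apiserver)/oom_adj`. Parsing that output amounts to (hypothetical helper; the legacy `oom_adj` range of -17 to +15 is a kernel fact, not from this log):

```python
def parse_oom_adj(output: str) -> int:
    """Parse `cat /proc/<pid>/oom_adj` output; the legacy oom_adj
    range is -17 (never OOM-kill) through +15."""
    value = int(output.strip())
    if not -17 <= value <= 15:
        raise ValueError(f"oom_adj out of range: {value}")
    return value

print(parse_oom_adj("-16\n"))  # → -16
```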
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
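The run of `kubectl get sa default` / `NotFound` pairs above is a poll loop: minikube retries roughly every half second until the `default` ServiceAccount exists, then records the total wait (13.46s here). A sketch of that pattern under the same assumptions (hypothetical `wait_for` helper, simulated check instead of a live cluster):

```python
import time

def wait_for(check, timeout: float = 60.0, interval: float = 0.5) -> int:
    """Poll `check()` until it returns True or `timeout` elapses;
    return the number of attempts. Mirrors the ~0.5 s cadence
    visible in the log above."""
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if check():
            return attempts
        time.sleep(interval)
    raise TimeoutError(f"condition not met after {attempts} attempts")

# Simulated check that succeeds on the 5th attempt:
state = {"n": 0}
def ready() -> bool:
    state["n"] += 1
    return state["n"] >= 5

print(wait_for(ready, interval=0.01))  # → 5
```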
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
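The sed pipeline a few lines up rewrites the coredns ConfigMap so that a `hosts` block (mapping `host.minikube.internal` to the host IP) sits before the `forward` plugin. The same Corefile edit can be sketched as plain string manipulation (hypothetical helper, not minikube's implementation):

```python
def inject_host_record(corefile: str, ip: str, hostname: str) -> str:
    """Insert a CoreDNS `hosts` block immediately before the
    `forward` plugin so `hostname` resolves inside pods."""
    hosts_block = (
        "        hosts {\n"
        f"           {ip} {hostname}\n"
        "           fallthrough\n"
        "        }\n"
    )
    out = []
    for line in corefile.splitlines(keepends=True):
        if line.lstrip().startswith("forward ."):
            out.append(hosts_block)  # hosts block takes precedence
        out.append(line)
    return "".join(out)

corefile = "    .:53 {\n        forward . /etc/resolv.conf\n    }\n"
print(inject_host_record(corefile, "172.25.240.1", "host.minikube.internal"))
```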
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
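	The repeated `( Get-VM ... ).state` / `ipaddresses[0]` calls above are a poll loop: minikube re-queries Hyper-V until the guest's network adapter reports an IPv4 address, sleeping between attempts. A minimal, portable sketch of that retry pattern (the `query` stub stands in for the PowerShell call; the IP and retry counts are illustrative):

```shell
tries=0
ip=""
# Stand-in for: (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
# Returns nothing for the first few polls, then an address, mimicking the log.
query() {
  [ "$tries" -ge 3 ] && echo "172.25.240.19"
}
# Poll until an IP appears or the attempt budget is exhausted.
while [ -z "$ip" ] && [ "$tries" -lt 10 ]; do
  ip=$(query)
  tries=$((tries + 1))
  [ -z "$ip" ] && sleep 0.1
done
echo "VM IP after $tries polls: $ip"
```

	In the real driver each poll is a full PowerShell round trip (~2-3s in the timestamps above), which is why bringing up a node takes about 30 seconds of polling before the address `172.25.240.19` appears.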
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
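	The SSH command above is minikube's standard 127.0.1.1 rewrite: replace an existing `127.0.1.1` entry with the node name, or append one if none exists. The same logic can be exercised locally against a scratch copy of `/etc/hosts` (the temp file and `old-name` entry are illustrative; the original's GNU `\s` is spelled `[[:space:]]` here for portability):

```shell
HOSTS=$(mktemp)
NAME=multinode-093300-m02
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
# Only rewrite when the hostname is not already present.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 entry is replaced in place...
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # ...otherwise a fresh entry is appended.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$RESULT"
rm -f "$HOSTS"
```

	Mapping `127.0.1.1` to the node name keeps `hostname -f` and kubelet's self-lookup working on the guest before cluster DNS exists.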
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
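	minikube generates that server certificate in Go, signed by its own CA (`ca.pem`/`ca-key.pem`). As an illustrative stand-in, the same SAN list can be produced with `openssl` (self-signed here for brevity; the temp output directory is an assumption, and `-addext` needs OpenSSL 1.1.1+):

```shell
DIR=$(mktemp -d)
# Self-signed server cert carrying the SANs from the log line above.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.pem" \
  -subj "/O=jenkins.multinode-093300-m02" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.25.240.19,DNS:localhost,DNS:minikube,DNS:multinode-093300-m02" \
  2>/dev/null
# Read the SAN extension back for inspection.
SAN=$(openssl x509 -in "$DIR/server.pem" -noout -ext subjectAltName)
echo "$SAN"
rm -rf "$DIR"
```

	The SANs matter because the Docker daemon on the node will be reached both by IP (`172.25.240.19`) and by hostname, and TLS verification fails for any name missing from the certificate.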
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
	
	
	==> Docker <==
	May 20 12:02:18 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:18.329348061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:18 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:02:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6cad91522eaf0eb11be29bc0ef9d53e92130ff551b27d7261803446743fe43/resolv.conf as [nameserver 172.25.240.1]"
	May 20 12:02:24 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:02:24Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240202-8f1494ea: Status: Downloaded newer image for kindest/kindnetd:v20240202-8f1494ea"
	May 20 12:02:24 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:24.594944690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:02:24 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:24.595046890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:24 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:24.595067590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:24 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:24.595230190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.754110686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.754326286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.754388386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.755550186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.768812788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.768918788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.769168788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:27 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:27.769393988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:27 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:02:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe98a09c9c2b4d3afa47502a3c9e54d8c2e0c2428d6a66d8b916ceb901a4362e/resolv.conf as [nameserver 172.25.240.1]"
	May 20 12:02:28 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:02:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad5e2e80d0f28856aa9cb453c7c2aee42bbb70188d134c719604a132807209b7/resolv.conf as [nameserver 172.25.240.1]"
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.155807549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.155995549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.156012149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.156118350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.313838662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.314175663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.314265163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.314435463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2f3e10de8772       cbb01a7bd410d                                                                              4 minutes ago       Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                              4 minutes ago       Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988   4 minutes ago       Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                              4 minutes ago       Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                              4 minutes ago       Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                              4 minutes ago       Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                              4 minutes ago       Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                              4 minutes ago       Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:06:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:02:33 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:02:33 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:02:33 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:02:33 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m13s
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m29s
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m10s  kube-proxy       
	  Normal  Starting                 4m27s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s  kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s  kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s  kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s  node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                4m3s   kubelet          Node multinode-093300 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.732218] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:01:56.872333Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"35fa5479c1404576","initial-advertise-peer-urls":["https://172.25.248.197:2380"],"listen-peer-urls":["https://172.25.248.197:2380"],"advertise-client-urls":["https://172.25.248.197:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.248.197:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T12:01:56.872628Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T12:01:56.873591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 switched to configuration voters=(3889514110097835382)"}
	{"level":"info","ts":"2024-05-20T12:01:56.873966Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6de7b93236da1ce","local-member-id":"35fa5479c1404576","added-peer-id":"35fa5479c1404576","added-peer-peer-urls":["https://172.25.248.197:2380"]}
	{"level":"info","ts":"2024-05-20T12:01:56.874187Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.25.248.197:2380"}
	{"level":"info","ts":"2024-05-20T12:01:56.878711Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.25.248.197:2380"}
	{"level":"info","ts":"2024-05-20T12:01:57.674791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.674924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.67506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 received MsgPreVoteResp from 35fa5479c1404576 at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.675121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.67515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 received MsgVoteResp from 35fa5479c1404576 at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.675207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.675398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 35fa5479c1404576 elected leader 35fa5479c1404576 at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.683796Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.68998Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"35fa5479c1404576","local-member-attributes":"{Name:multinode-093300 ClientURLs:[https://172.25.248.197:2379]}","request-path":"/0/members/35fa5479c1404576/attributes","cluster-id":"6de7b93236da1ce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T12:01:57.690259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T12:01:57.690793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T12:01:57.691358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T12:01:57.693751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T12:01:57.701267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T12:01:57.712542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.248.197:2379"}
	{"level":"info","ts":"2024-05-20T12:01:57.733534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6de7b93236da1ce","local-member-id":"35fa5479c1404576","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.738861Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.739348Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:02:43.609464Z","caller":"traceutil/trace.go:171","msg":"trace[355698758] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"126.890272ms","start":"2024-05-20T12:02:43.482555Z","end":"2024-05-20T12:02:43.609446Z","steps":["trace[355698758] 'process raft request'  (duration: 126.74047ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:06:29 up 6 min,  0 users,  load average: 0.37, 0.40, 0.21
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:04:25.709940       1 main.go:227] handling current node
	I0520 12:04:35.735634       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:04:35.735781       1 main.go:227] handling current node
	I0520 12:04:45.750971       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:04:45.751117       1 main.go:227] handling current node
	I0520 12:04:55.763593       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:04:55.764240       1 main.go:227] handling current node
	I0520 12:05:05.769581       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:05:05.769694       1 main.go:227] handling current node
	I0520 12:05:15.782026       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:05:15.782104       1 main.go:227] handling current node
	I0520 12:05:25.792062       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:05:25.792087       1 main.go:227] handling current node
	I0520 12:05:35.806507       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:05:35.806711       1 main.go:227] handling current node
	I0520 12:05:45.817926       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:05:45.818020       1 main.go:227] handling current node
	I0520 12:05:55.823972       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:05:55.824067       1 main.go:227] handling current node
	I0520 12:06:05.838316       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:06:05.838450       1 main.go:227] handling current node
	I0520 12:06:15.843140       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:06:15.843258       1 main.go:227] handling current node
	I0520 12:06:25.850048       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:06:25.850216       1 main.go:227] handling current node
	
	
	==> kube-apiserver [477e3df15a9c] <==
	I0520 12:01:59.567183       1 policy_source.go:224] refreshing policies
	I0520 12:01:59.591160       1 controller.go:615] quota admission added evaluator for: namespaces
	E0520 12:01:59.694187       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0520 12:01:59.694281       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0520 12:01:59.902619       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:15.881107       1 shared_informer.go:320] Caches are synced for namespace
	I0520 12:02:15.884944       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 12:02:15.892027       1 shared_informer.go:320] Caches are synced for ephemeral
	I0520 12:02:15.894186       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 12:02:15.898568       1 shared_informer.go:320] Caches are synced for service account
	I0520 12:02:15.915786       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 12:02:15.925962       1 shared_informer.go:320] Caches are synced for PVC protection
	I0520 12:02:15.939786       1 shared_informer.go:320] Caches are synced for expand
	I0520 12:02:15.949136       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 12:02:15.950501       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 12:02:15.982781       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 12:02:16.379630       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.379657       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 12:02:16.417564       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.906228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="303.284225ms"
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:02:27 multinode-093300 kubelet[2141]: I0520 12:02:27.123136    2141 topology_manager.go:215] "Topology Admit Handler" podUID="602cea4d-2fe9-49e2-a7f4-87da56d86428" podNamespace="kube-system" podName="storage-provisioner"
	May 20 12:02:27 multinode-093300 kubelet[2141]: I0520 12:02:27.135277    2141 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26nj7\" (UniqueName: \"kubernetes.io/projected/602cea4d-2fe9-49e2-a7f4-87da56d86428-kube-api-access-26nj7\") pod \"storage-provisioner\" (UID: \"602cea4d-2fe9-49e2-a7f4-87da56d86428\") " pod="kube-system/storage-provisioner"
	May 20 12:02:27 multinode-093300 kubelet[2141]: I0520 12:02:27.135333    2141 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/602cea4d-2fe9-49e2-a7f4-87da56d86428-tmp\") pod \"storage-provisioner\" (UID: \"602cea4d-2fe9-49e2-a7f4-87da56d86428\") " pod="kube-system/storage-provisioner"
	May 20 12:02:28 multinode-093300 kubelet[2141]: I0520 12:02:28.037009    2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad5e2e80d0f28856aa9cb453c7c2aee42bbb70188d134c719604a132807209b7"
	May 20 12:02:29 multinode-093300 kubelet[2141]: I0520 12:02:29.123008    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.122988089 podStartE2EDuration="4.122988089s" podCreationTimestamp="2024-05-20 12:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 12:02:29.105238585 +0000 UTC m=+26.571719379" watchObservedRunningTime="2024-05-20 12:02:29.122988089 +0000 UTC m=+26.589468783"
	May 20 12:03:02 multinode-093300 kubelet[2141]: E0520 12:03:02.791235    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:03:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:03:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:03:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:03:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:04:02 multinode-093300 kubelet[2141]: E0520 12:04:02.779671    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:04:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:04:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:04:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:04:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:05:02 multinode-093300 kubelet[2141]: E0520 12:05:02.780633    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:05:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:05:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:05:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:05:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:06:02 multinode-093300 kubelet[2141]: E0520 12:06:02.781600    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:06:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:06:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:06:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:06:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [2842c911dbc8] <==
	I0520 12:02:28.399856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 12:02:28.434390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 12:02:28.436460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 12:02:28.452812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 12:02:28.453576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3!
	I0520 12:02:28.454925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"378535d4-051f-4c38-8167-adef61b820bc", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3 became leader
	I0520 12:02:28.557085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3!
	

-- /stdout --
** stderr ** 
	W0520 05:06:21.114059   10064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.7925929s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (481.34s)

TestMultiNode/serial/DeployApp2Nodes (723.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- rollout status deployment/busybox
E0520 05:06:48.283207    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 05:08:04.573040    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:10:25.056279    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 05:12:47.803904    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:13:04.560154    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:15:25.065821    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- rollout status deployment/busybox: exit status 1 (10m3.5669454s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

-- /stdout --
** stderr ** 
	W0520 05:06:44.595726    5272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:16:48.176438   10264 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:16:49.634959    1632 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:16:51.216337    5680 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:16:53.693043   14132 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:16:57.393485    6068 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:17:04.895935   15088 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:17:16.102465    7608 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:17:30.117882    5440 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:17:48.044960    8116 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0520 05:18:04.576190    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:18:07.016676   11752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0520 05:18:07.016676   11752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- nslookup kubernetes.io: exit status 1 (343.0975ms)

** stderr ** 
	W0520 05:18:07.685404    3768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-ncmp8 does not have a host assigned

** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-ncmp8 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- nslookup kubernetes.io: (1.9933859s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- nslookup kubernetes.default: exit status 1 (352.1509ms)

** stderr ** 
	W0520 05:18:10.033940     276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-ncmp8 does not have a host assigned

** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-ncmp8 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (331.168ms)

                                                
                                                
** stderr ** 
	W0520 05:18:10.844473    9900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-ncmp8 does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-ncmp8 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- nslookup kubernetes.default.svc.cluster.local
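A side note on the repeated stderr warning above: Docker stores per-context metadata under a directory named after the SHA-256 digest of the context name, which is why the missing `meta.json` path for the `default` context ends in `37a8eec1...f0688f`. A minimal sketch (assuming a Linux shell with `sha256sum`) reproduces the directory name:

```shell
# Docker keeps context metadata under ~/.docker/contexts/meta/<sha256(name)>/meta.json.
# The digest of "default" should match the directory name in the warning above.
digest=$(printf 'default' | sha256sum | cut -d ' ' -f 1)
echo "$digest"
```

The warning itself is benign (the CLI falls back to the default endpoint); the actual failure is `Error from server (BadRequest): pod ... does not have a host assigned`, i.e. the busybox pod was never scheduled onto a node, so `exec` has no host to target.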
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.7271386s)
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.7571542s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-931300                           | mount-start-2-931300 | minikube1\jenkins | v1.33.1 | 20 May 24 04:55 PDT | 20 May 24 04:58 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-931300 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --profile mount-start-2-931300 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-931300 ssh -- ls                    | mount-start-2-931300 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-931300                           | mount-start-2-931300 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	| delete  | -p mount-start-1-859800                           | mount-start-1-859800 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	| start   | -p multinode-093300                               | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- apply -f                   | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT | 20 May 24 05:06 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- rollout                    | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
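The Audit table shows the test polling `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` roughly once a minute from 05:16 to 05:18, waiting for the busybox pods to be assigned IPs. That is a plain poll-until-nonempty loop; a minimal sketch of the pattern, with a hypothetical `get_pod_ips` stub standing in for the real kubectl call:

```shell
# Poll until the command reports at least one pod IP, up to max_tries attempts.
# get_pod_ips is a stand-in; the real harness runs:
#   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
tries=0
get_pod_ips() { if [ "$tries" -ge 3 ]; then echo "10.244.0.3 10.244.1.2"; fi; }
ips=""
max_tries=10
while [ -z "$ips" ] && [ "$tries" -lt "$max_tries" ]; do
  tries=$((tries + 1))
  ips=$(get_pod_ips)
  # the real harness sleeps between attempts
done
echo "got IPs after $tries tries: $ips"
```

In this failing run the loop kept returning an empty IP for `busybox-fc5497c4f-ncmp8` because that pod was never scheduled, which is what the later `does not have a host assigned` errors confirm.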
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
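[Editor's note] The /etc/hosts update at 05:01:46.603 uses a filter-then-append idiom: strip any stale line for control-plane.minikube.internal, emit the current entry, then copy the result back. A sketch of the same pattern against a scratch file (no sudo needed); the IP comes from the log, the starting contents are made up:

```shell
# Rewrite a hosts-style file so the name maps to exactly one, current IP.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=172.25.248.197
# Drop any existing tab-separated entry for the name, then append the new one.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"
```

The braces group both commands into one redirection, so the filtered contents and the appended entry land in the temp file together before it replaces the original.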
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
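[Editor's note] The b5213941.0, 51391683.0, and 3ec20f2e.0 links above follow OpenSSL's subject-hash naming convention for CA trust directories: compute the subject hash of the certificate, then symlink `<hash>.0` to it. A minimal sketch of the same step against a throwaway self-signed certificate (the CN and all paths are illustrative, not from the test run):

```shell
# Create a scratch self-signed CA, then link it under its OpenSSL subject hash.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
# The hash-named link resolves to a parseable certificate.
openssl x509 -noout -subject -in "$tmp/$hash.0"
```

OpenSSL looks certificates up by this hash at verification time, which is why minikube only creates the `<hash>.0` link when it does not already exist (`test -L ... || ln -fs ...`).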
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
	
	
	==> Docker <==
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.155995549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.156012149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.156118350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.313838662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.314175663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.314265163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:02:28 multinode-093300 dockerd[1336]: time="2024-05-20T12:02:28.314435463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.314836916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.315487220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.316184625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.316419326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:45 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:06:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ffde8c3540f6d3237aaee7b7efe3fb67a2eaf2d46da1957d9f1398416fa886e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 12:06:46 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:06:46Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.812890560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813037260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813087160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813245260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb9d0befbc6f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   2ffde8c3540f6       busybox-fc5497c4f-rk7lk
	c2f3e10de8772       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                                         16 minutes ago      Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                                         16 minutes ago      Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                                         16 minutes ago      Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                                         16 minutes ago      Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	[INFO] 10.244.0.3:48640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231801s
	[INFO] 10.244.0.3:43113 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.175241678s
	[INFO] 10.244.0.3:55421 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066162156s
	[INFO] 10.244.0.3:57037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.307819065s
	[INFO] 10.244.0.3:46291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186401s
	[INFO] 10.244.0.3:42353 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028087509s
	[INFO] 10.244.0.3:39344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194901s
	[INFO] 10.244.0.3:36993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000272401s
	[INFO] 10.244.0.3:48495 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011425645s
	[INFO] 10.244.0.3:49945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142201s
	[INFO] 10.244.0.3:52438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.0.3:51309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.0.3:43788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001446s
	[INFO] 10.244.0.3:48355 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215101s
	[INFO] 10.244.0.3:46628 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000596s
	[INFO] 10.244.0.3:52558 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000566602s
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rk7lk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m   node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                16m   kubelet          Node multinode-093300 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	[May20 12:06] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:01:57.674791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.674924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.67506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 received MsgPreVoteResp from 35fa5479c1404576 at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.675121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.67515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 received MsgVoteResp from 35fa5479c1404576 at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.675207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.675398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 35fa5479c1404576 elected leader 35fa5479c1404576 at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.683796Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.68998Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"35fa5479c1404576","local-member-attributes":"{Name:multinode-093300 ClientURLs:[https://172.25.248.197:2379]}","request-path":"/0/members/35fa5479c1404576/attributes","cluster-id":"6de7b93236da1ce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T12:01:57.690259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T12:01:57.690793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T12:01:57.691358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T12:01:57.693751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T12:01:57.701267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T12:01:57.712542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.248.197:2379"}
	{"level":"info","ts":"2024-05-20T12:01:57.733534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6de7b93236da1ce","local-member-id":"35fa5479c1404576","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.738861Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.739348Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:02:43.609464Z","caller":"traceutil/trace.go:171","msg":"trace[355698758] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"126.890272ms","start":"2024-05-20T12:02:43.482555Z","end":"2024-05-20T12:02:43.609446Z","steps":["trace[355698758] 'process raft request'  (duration: 126.74047ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:11:57.883212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":664}
	{"level":"info","ts":"2024-05-20T12:11:57.901107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":664,"took":"17.242145ms","hash":418129480,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T12:11:57.901416Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":418129480,"revision":664,"compact-revision":-1}
	{"level":"info","ts":"2024-05-20T12:16:57.900461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":904}
	{"level":"info","ts":"2024-05-20T12:16:57.908914Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":904,"took":"7.825229ms","hash":2564373708,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:16:57.908964Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2564373708,"revision":904,"compact-revision":664}
	
	
	==> kernel <==
	 12:18:32 up 18 min,  0 users,  load average: 0.20, 0.21, 0.18
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:16:26.507060       1 main.go:227] handling current node
	I0520 12:16:36.519911       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:16:36.520100       1 main.go:227] handling current node
	I0520 12:16:46.529810       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:16:46.529912       1 main.go:227] handling current node
	I0520 12:16:56.536400       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:16:56.536506       1 main.go:227] handling current node
	I0520 12:17:06.546339       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:06.546502       1 main.go:227] handling current node
	I0520 12:17:16.558634       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:16.558677       1 main.go:227] handling current node
	I0520 12:17:26.572370       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:26.572502       1 main.go:227] handling current node
	I0520 12:17:36.586104       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:36.586201       1 main.go:227] handling current node
	I0520 12:17:46.594799       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:46.594897       1 main.go:227] handling current node
	I0520 12:17:56.600477       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:56.600570       1 main.go:227] handling current node
	I0520 12:18:06.611220       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:06.611432       1 main.go:227] handling current node
	I0520 12:18:16.619039       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:16.619133       1 main.go:227] handling current node
	I0520 12:18:26.626065       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:26.626112       1 main.go:227] handling current node
	
	
	==> kube-apiserver [477e3df15a9c] <==
	E0520 12:01:59.694281       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0520 12:01:59.902619       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 12:18:09.877717       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62575: use of closed network connection
	E0520 12:18:10.700260       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62580: use of closed network connection
	E0520 12:18:11.474273       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62585: use of closed network connection
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:15.915786       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 12:02:15.925962       1 shared_informer.go:320] Caches are synced for PVC protection
	I0520 12:02:15.939786       1 shared_informer.go:320] Caches are synced for expand
	I0520 12:02:15.949136       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 12:02:15.950501       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 12:02:15.982781       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 12:02:16.379630       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.379657       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 12:02:16.417564       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.906228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="303.284225ms"
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0520 12:06:44.544627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.513035ms"
	I0520 12:06:44.556530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.014067ms"
	I0520 12:06:44.557710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.9µs"
	I0520 12:06:47.616256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.299406ms"
	I0520 12:06:47.616355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.5µs"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:14:02 multinode-093300 kubelet[2141]: E0520 12:14:02.789760    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:14:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:14:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:14:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:14:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:15:02 multinode-093300 kubelet[2141]: E0520 12:15:02.780797    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:15:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:15:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:15:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:15:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:16:02 multinode-093300 kubelet[2141]: E0520 12:16:02.778304    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:16:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:16:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:16:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:16:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:17:02 multinode-093300 kubelet[2141]: E0520 12:17:02.778419    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:17:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:17:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:17:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:17:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:18:02 multinode-093300 kubelet[2141]: E0520 12:18:02.777994    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:18:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:18:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:18:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:18:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [2842c911dbc8] <==
	I0520 12:02:28.399856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 12:02:28.434390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 12:02:28.436460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 12:02:28.452812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 12:02:28.453576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3!
	I0520 12:02:28.454925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"378535d4-051f-4c38-8167-adef61b820bc", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3 became leader
	I0520 12:02:28.557085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3!
	

-- /stdout --
** stderr ** 
	W0520 05:18:24.344767   14360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.7437573s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-ncmp8
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-093300 describe pod busybox-fc5497c4f-ncmp8
helpers_test.go:282: (dbg) kubectl --context multinode-093300 describe pod busybox-fc5497c4f-ncmp8:

-- stdout --
	Name:             busybox-fc5497c4f-ncmp8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nqwgc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-nqwgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  103s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (723.22s)

TestMultiNode/serial/PingHostFrom2Pods (47.43s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-ncmp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (332.5965ms)

** stderr ** 
	W0520 05:18:47.706625   14596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-ncmp8 does not have a host assigned

** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-ncmp8 could not resolve 'host.minikube.internal': exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- sh -c "ping -c 1 172.25.240.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-093300 -- exec busybox-fc5497c4f-rk7lk -- sh -c "ping -c 1 172.25.240.1": exit status 1 (10.4469293s)

-- stdout --
	PING 172.25.240.1 (172.25.240.1): 56 data bytes
	
	--- 172.25.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0520 05:18:48.466336    1084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.25.240.1) from pod (busybox-fc5497c4f-rk7lk): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.8691115s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.816737s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-859800                           | mount-start-1-859800 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	| start   | -p multinode-093300                               | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- apply -f                   | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT | 20 May 24 05:06 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- rollout                    | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300     | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-rk7lk -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
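The `find ... -exec mv {} {}.mk_disabled` step above disables conflicting bridge/podman CNI configs by renaming them so the runtime no longer loads them. A self-contained sketch of the same pattern against a scratch directory (file names are illustrative):

```shell
# Rename bridge/podman CNI configs with a .mk_disabled suffix, leaving
# other configs (and already-disabled ones) untouched.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
disabled=$(ls "$d" | grep -c 'mk_disabled$')
kept=$(ls "$d" | grep -c '^10-kindnet.conflist$')
echo "disabled=$disabled kept=$kept"
rm -rf "$d"
```

The `-not -name '*.mk_disabled'` guard makes the operation idempotent: re-running it does not double-suffix files already disabled.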
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
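The `printf ... | sudo tee /etc/crictl.yaml` command above points crictl at the containerd socket. The same write, sketched without `sudo` against a temp file so it runs anywhere:

```shell
# Write the crictl runtime endpoint (mirrors the logged command; the
# real target path is /etc/crictl.yaml, a temp file is used here).
tmp=$(mktemp)
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' > "$tmp"
endpoint=$(cat "$tmp")
echo "$endpoint"
rm -f "$tmp"
```

Later in the log the same file is rewritten to `unix:///var/run/cri-dockerd.sock` once Docker (via cri-dockerd) is chosen as the runtime.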
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
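The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the `cgroupfs` cgroup driver and the `runc.v2` runtime. The central edit, demonstrated on a throwaway config fragment with the exact sed pattern from the log:

```shell
# Flip SystemdCgroup to false, preserving indentation via the \1
# capture group (same pattern as the logged command; GNU sed -r).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep -o 'SystemdCgroup = false' "$cfg")
echo "$result"
rm -f "$cfg"
```

Using `.*$` on the right of `=` makes the edit idempotent regardless of the current value, which is why minikube can apply it unconditionally on every start.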
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
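The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step polls with `stat` until the socket appears. A small sketch of that wait loop, using a plain file created in the background in place of the real socket:

```shell
# Poll for a path to appear, up to ~5s, the way minikube waits for
# /var/run/cri-dockerd.sock (illustrative; real code is Go, not shell).
sock=$(mktemp -u)
( sleep 0.2; touch "$sock" ) &
ready=no
for _ in $(seq 1 50); do
  if [ -e "$sock" ]; then ready=yes; break; fi
  sleep 0.1
done
wait
echo "$ready"
rm -f "$sock"
```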
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
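The one-liner above is an idempotent `/etc/hosts` update: filter out any stale `host.minikube.internal` entry, then append the current mapping. The same trick against a scratch copy (the stale `172.25.240.9` entry is invented for illustration):

```shell
# Replace-or-append a hosts entry: grep -v drops the old line (matched
# by a literal tab plus the name anchored at end-of-line), then the
# fresh mapping is appended. Runs against a temp file, not /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.240.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
entry=$(grep 'host.minikube.internal' "$hosts")
echo "$entry"
rm -f "$hosts"
```

The initial `grep 172.25.240.1 ... /etc/hosts` in the log is an optimization: the rewrite only runs when the entry is missing or stale.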
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
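The preload step scp's an lz4-compressed image tarball into the VM and unpacks it with `tar -I lz4` (the ~14s `Completed:` line above). A round-trip sketch of that pack/extract pattern; gzip stands in for lz4 so it runs without the `lz4` binary, and the logged `--xattrs --xattrs-include security.capability` flags (which preserve file capabilities on the real image layers) are dropped:

```shell
# Pack a directory with tar + an external compressor and extract it the
# way minikube's preload does. Paths and content are illustrative.
src=$(mktemp -d); dst=$(mktemp -d)
echo preloaded > "$src/marker"
tarball=$(mktemp -u).tar.gz
tar -C "$src" -I gzip -cf "$tarball" .      # create, compressing via -I
tar -I gzip -C "$dst" -xf "$tarball"        # extract into the target dir
content=$(cat "$dst/marker")
echo "$content"
rm -rf "$src" "$dst" "$tarball"
```

Extracting into `/var` pre-populates `/var/lib/docker`, which is why the subsequent `docker images` listing shows all the Kubernetes images without any pulls.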
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
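The kubeadm config dumped above is a single multi-document YAML stream: four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` lines. As an illustrative, stdlib-only sketch (not minikube code), splitting such a stream into its documents looks like:

```python
# Illustrative sketch: split a multi-document YAML stream on "---"
# separator lines, as used by the kubeadm.yaml above. Not minikube code;
# a real consumer would hand each document to a YAML parser.
def split_yaml_docs(stream: str) -> list[str]:
    docs, current = [], []
    for line in stream.splitlines():
        if line.strip() == "---":
            docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    docs.append("\n".join(current))
    return [d for d in docs if d.strip()]
```

kubeadm itself applies each document to the component named by its `kind`, which is why one file can configure the API server, kubelet, and kube-proxy at once.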
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
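The bash one-liner above updates `/etc/hosts` atomically in spirit: it filters out any existing `control-plane.minikube.internal` entry, appends the fresh IP mapping, writes to a temp file, then copies it into place. A hedged Python restatement of the filter-and-append step (illustrative only, not minikube code):

```python
# Sketch of the /etc/hosts rewrite performed by the bash one-liner above:
# drop any stale line for the host, then append the fresh IP mapping.
# Illustrative only; minikube does this in bash over ssh.
def update_hosts(contents: str, ip: str, host: str) -> str:
    kept = [l for l in contents.splitlines() if not l.endswith(f"\t{host}")]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"
```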
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
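Each `test -L … || ln -fs …` command above idempotently installs a CA certificate under OpenSSL's hashed-symlink convention: the link name is the cert's subject hash (from `openssl x509 -hash -noout`) plus a `.0` suffix, e.g. `b5213941.0`, so the verifier can locate the CA by hash. An illustrative Python equivalent of that compound command (an assumption-labeled sketch, not minikube code):

```python
import os

def ensure_symlink(target: str, link: str) -> None:
    # Mirrors `test -L link || ln -fs target link` from the log above:
    # create the symlink only when it is not already present, so the
    # operation is safe to repeat. Illustrative sketch only.
    if not os.path.islink(link):
        os.symlink(target, link)
```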
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
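The long `--ignore-preflight-errors` value in the `kubeadm init` invocation below is one comma-joined list of preflight check names. A trivial sketch of how such a flag is assembled (check names copied from the log line, list abbreviated; illustrative only):

```python
# Illustrative: build the single comma-joined --ignore-preflight-errors
# flag that minikube passes to `kubeadm init` (abbreviated check list).
def preflight_flag(checks: list[str]) -> str:
    return "--ignore-preflight-errors=" + ",".join(checks)
```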
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
	
	
	==> Docker <==
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:29 multinode-093300 dockerd[1329]: 2024/05/20 12:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.314836916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.315487220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.316184625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.316419326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:45 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:06:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ffde8c3540f6d3237aaee7b7efe3fb67a2eaf2d46da1957d9f1398416fa886e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 12:06:46 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:06:46Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.812890560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813037260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813087160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813245260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb9d0befbc6f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   2ffde8c3540f6       busybox-fc5497c4f-rk7lk
	c2f3e10de8772       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              16 minutes ago      Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                                         17 minutes ago      Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                                         17 minutes ago      Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                                         17 minutes ago      Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                                         17 minutes ago      Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	[INFO] 10.244.0.3:48640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231801s
	[INFO] 10.244.0.3:43113 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.175241678s
	[INFO] 10.244.0.3:55421 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066162156s
	[INFO] 10.244.0.3:57037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.307819065s
	[INFO] 10.244.0.3:46291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186401s
	[INFO] 10.244.0.3:42353 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028087509s
	[INFO] 10.244.0.3:39344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194901s
	[INFO] 10.244.0.3:36993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000272401s
	[INFO] 10.244.0.3:48495 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011425645s
	[INFO] 10.244.0.3:49945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142201s
	[INFO] 10.244.0.3:52438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.0.3:51309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.0.3:43788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001446s
	[INFO] 10.244.0.3:48355 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215101s
	[INFO] 10.244.0.3:46628 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000596s
	[INFO] 10.244.0.3:52558 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000566602s
	[INFO] 10.244.0.3:32981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320401s
	[INFO] 10.244.0.3:49440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000250601s
	[INFO] 10.244.0.3:54411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000254101s
	[INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:19:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:17:20 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rk7lk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m   node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                16m   kubelet          Node multinode-093300 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	[May20 12:06] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:01:57.674791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.674924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.67506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 received MsgPreVoteResp from 35fa5479c1404576 at term 1"}
	{"level":"info","ts":"2024-05-20T12:01:57.675121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.67515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 received MsgVoteResp from 35fa5479c1404576 at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.675207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35fa5479c1404576 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.675398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 35fa5479c1404576 elected leader 35fa5479c1404576 at term 2"}
	{"level":"info","ts":"2024-05-20T12:01:57.683796Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.68998Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"35fa5479c1404576","local-member-attributes":"{Name:multinode-093300 ClientURLs:[https://172.25.248.197:2379]}","request-path":"/0/members/35fa5479c1404576/attributes","cluster-id":"6de7b93236da1ce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T12:01:57.690259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T12:01:57.690793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T12:01:57.691358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T12:01:57.693751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T12:01:57.701267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T12:01:57.712542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.248.197:2379"}
	{"level":"info","ts":"2024-05-20T12:01:57.733534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6de7b93236da1ce","local-member-id":"35fa5479c1404576","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.738861Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:01:57.739348Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:02:43.609464Z","caller":"traceutil/trace.go:171","msg":"trace[355698758] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"126.890272ms","start":"2024-05-20T12:02:43.482555Z","end":"2024-05-20T12:02:43.609446Z","steps":["trace[355698758] 'process raft request'  (duration: 126.74047ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:11:57.883212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":664}
	{"level":"info","ts":"2024-05-20T12:11:57.901107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":664,"took":"17.242145ms","hash":418129480,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T12:11:57.901416Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":418129480,"revision":664,"compact-revision":-1}
	{"level":"info","ts":"2024-05-20T12:16:57.900461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":904}
	{"level":"info","ts":"2024-05-20T12:16:57.908914Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":904,"took":"7.825229ms","hash":2564373708,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:16:57.908964Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2564373708,"revision":904,"compact-revision":664}
	
	
	==> kernel <==
	 12:19:20 up 19 min,  0 users,  load average: 0.26, 0.23, 0.19
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:17:16.558677       1 main.go:227] handling current node
	I0520 12:17:26.572370       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:26.572502       1 main.go:227] handling current node
	I0520 12:17:36.586104       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:36.586201       1 main.go:227] handling current node
	I0520 12:17:46.594799       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:46.594897       1 main.go:227] handling current node
	I0520 12:17:56.600477       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:17:56.600570       1 main.go:227] handling current node
	I0520 12:18:06.611220       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:06.611432       1 main.go:227] handling current node
	I0520 12:18:16.619039       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:16.619133       1 main.go:227] handling current node
	I0520 12:18:26.626065       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:26.626112       1 main.go:227] handling current node
	I0520 12:18:36.642634       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:36.642880       1 main.go:227] handling current node
	I0520 12:18:46.659414       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:46.659441       1 main.go:227] handling current node
	I0520 12:18:56.678196       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:18:56.678305       1 main.go:227] handling current node
	I0520 12:19:06.687508       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:19:06.687971       1 main.go:227] handling current node
	I0520 12:19:16.693275       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:19:16.693432       1 main.go:227] handling current node
	
	
	==> kube-apiserver [477e3df15a9c] <==
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 12:18:09.877717       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62575: use of closed network connection
	E0520 12:18:10.700260       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62580: use of closed network connection
	E0520 12:18:11.474273       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62585: use of closed network connection
	E0520 12:18:48.326152       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62602: use of closed network connection
	E0520 12:18:58.782603       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62604: use of closed network connection
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:15.915786       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 12:02:15.925962       1 shared_informer.go:320] Caches are synced for PVC protection
	I0520 12:02:15.939786       1 shared_informer.go:320] Caches are synced for expand
	I0520 12:02:15.949136       1 shared_informer.go:320] Caches are synced for persistent volume
	I0520 12:02:15.950501       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 12:02:15.982781       1 shared_informer.go:320] Caches are synced for attach detach
	I0520 12:02:16.379630       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.379657       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 12:02:16.417564       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.906228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="303.284225ms"
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0520 12:06:44.544627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.513035ms"
	I0520 12:06:44.556530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.014067ms"
	I0520 12:06:44.557710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.9µs"
	I0520 12:06:47.616256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.299406ms"
	I0520 12:06:47.616355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.5µs"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:15:02 multinode-093300 kubelet[2141]: E0520 12:15:02.780797    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:15:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:15:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:15:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:15:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:16:02 multinode-093300 kubelet[2141]: E0520 12:16:02.778304    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:16:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:16:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:16:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:16:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:17:02 multinode-093300 kubelet[2141]: E0520 12:17:02.778419    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:17:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:17:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:17:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:17:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:18:02 multinode-093300 kubelet[2141]: E0520 12:18:02.777994    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:18:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:18:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:18:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:18:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:19:02 multinode-093300 kubelet[2141]: E0520 12:19:02.782956    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:19:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:19:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:19:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:19:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [2842c911dbc8] <==
	I0520 12:02:28.399856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 12:02:28.434390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 12:02:28.436460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 12:02:28.452812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 12:02:28.453576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3!
	I0520 12:02:28.454925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"378535d4-051f-4c38-8167-adef61b820bc", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3 became leader
	I0520 12:02:28.557085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-093300_0a8f60a1-3515-4090-8a50-2774d90669b3!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:19:11.784662    9284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.7156376s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-ncmp8
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-093300 describe pod busybox-fc5497c4f-ncmp8
helpers_test.go:282: (dbg) kubectl --context multinode-093300 describe pod busybox-fc5497c4f-ncmp8:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-ncmp8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nqwgc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-nqwgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m30s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (47.43s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (276.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-093300 -v 3 --alsologtostderr
E0520 05:20:25.060542    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-093300 -v 3 --alsologtostderr: (3m23.1927335s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr
E0520 05:23:04.576959    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:23:28.299077    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr: exit status 2 (37.5001767s)

                                                
                                                
-- stdout --
	multinode-093300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-093300-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-093300-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:22:58.008907   14316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 05:22:58.016900   14316 out.go:291] Setting OutFile to fd 1984 ...
	I0520 05:22:58.018149   14316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:22:58.018149   14316 out.go:304] Setting ErrFile to fd 1180...
	I0520 05:22:58.018149   14316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:22:58.034999   14316 out.go:298] Setting JSON to false
	I0520 05:22:58.034999   14316 mustload.go:65] Loading cluster: multinode-093300
	I0520 05:22:58.034999   14316 notify.go:220] Checking for updates...
	I0520 05:22:58.034999   14316 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:22:58.036001   14316 status.go:255] checking status of multinode-093300 ...
	I0520 05:22:58.036001   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:23:00.389282   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:00.389282   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:00.389282   14316 status.go:330] multinode-093300 host status = "Running" (err=<nil>)
	I0520 05:23:00.389282   14316 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:23:00.390231   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:23:02.693263   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:02.693263   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:02.694283   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:23:05.401434   14316 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:23:05.402284   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:05.402284   14316 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:23:05.418746   14316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:23:05.418746   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:23:07.649911   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:07.649911   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:07.650014   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:23:10.328524   14316 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:23:10.328582   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:10.328582   14316 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:23:10.433258   14316 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0144994s)
	I0520 05:23:10.448298   14316 ssh_runner.go:195] Run: systemctl --version
	I0520 05:23:10.469754   14316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:23:10.495257   14316 kubeconfig.go:125] found "multinode-093300" server: "https://172.25.248.197:8443"
	I0520 05:23:10.495257   14316 api_server.go:166] Checking apiserver status ...
	I0520 05:23:10.507581   14316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:23:10.546877   14316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2091/cgroup
	W0520 05:23:10.564943   14316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 05:23:10.578559   14316 ssh_runner.go:195] Run: ls
	I0520 05:23:10.585898   14316 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:23:10.594144   14316 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:23:10.594144   14316 status.go:422] multinode-093300 apiserver status = Running (err=<nil>)
	I0520 05:23:10.594144   14316 status.go:257] multinode-093300 status: &{Name:multinode-093300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 05:23:10.594803   14316 status.go:255] checking status of multinode-093300-m02 ...
	I0520 05:23:10.595405   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:23:12.852126   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:12.853359   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:12.853359   14316 status.go:330] multinode-093300-m02 host status = "Running" (err=<nil>)
	I0520 05:23:12.853411   14316 host.go:66] Checking if "multinode-093300-m02" exists ...
	I0520 05:23:12.854225   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:23:15.122531   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:15.122531   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:15.123329   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:23:17.795876   14316 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:23:17.796393   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:17.796393   14316 host.go:66] Checking if "multinode-093300-m02" exists ...
	I0520 05:23:17.809823   14316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:23:17.809823   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:23:20.043627   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:20.043627   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:20.044194   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:23:22.715869   14316 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:23:22.715869   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:22.716772   14316 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:23:22.823253   14316 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0133388s)
	I0520 05:23:22.841342   14316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:23:22.868137   14316 status.go:257] multinode-093300-m02 status: &{Name:multinode-093300-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 05:23:22.868193   14316 status.go:255] checking status of multinode-093300-m03 ...
	I0520 05:23:22.868824   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:23:25.156138   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:25.156138   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:25.156138   14316 status.go:330] multinode-093300-m03 host status = "Running" (err=<nil>)
	I0520 05:23:25.157140   14316 host.go:66] Checking if "multinode-093300-m03" exists ...
	I0520 05:23:25.158022   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:23:27.477784   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:27.477784   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:27.478046   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:23:30.217227   14316 main.go:141] libmachine: [stdout =====>] : 172.25.250.168
	
	I0520 05:23:30.217227   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:30.217227   14316 host.go:66] Checking if "multinode-093300-m03" exists ...
	I0520 05:23:30.231583   14316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:23:30.231583   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:23:32.526002   14316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:23:32.526756   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:32.526878   14316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:23:35.220342   14316 main.go:141] libmachine: [stdout =====>] : 172.25.250.168
	
	I0520 05:23:35.220342   14316 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:23:35.220934   14316 sshutil.go:53] new ssh client: &{IP:172.25.250.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m03\id_rsa Username:docker}
	I0520 05:23:35.327629   14316 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0959489s)
	I0520 05:23:35.341192   14316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:23:35.368046   14316 status.go:257] multinode-093300-m03 status: &{Name:multinode-093300-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:129: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.7654244s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.817738s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-093300                               | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- apply -f                   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT | 20 May 24 05:06 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- rollout                    | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-rk7lk -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-093300 -v 3                      | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:19 PDT | 20 May 24 05:22 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
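	One detail worth noting in the `New-VHD` output above: `FileSize` (10486272) exceeds the requested `Size` (10485760, i.e. 10 MB) by exactly 512 bytes, because a fixed-format VHD stores a 512-byte footer after the raw data region. A quick check of the arithmetic:

```shell
# Fixed VHD on-disk size = data size + 512-byte footer.
# Size=10485760 and FileSize=10486272 in the log above fit this exactly.
data_size=$((10 * 1024 * 1024))   # 10 MB requested via -SizeBytes 10MB
vhd_footer=512                    # fixed-VHD footer
file_size=$((data_size + vhd_footer))
echo "$file_size"
```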
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
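	The "Waiting for host to start..." phase above polls the VM state and the adapter's `ipaddresses[0]` every few seconds until Hyper-V reports an address (about 29 seconds in this run). The retry pattern can be sketched in shell; `get_vm_ip` below is a hypothetical stand-in for the PowerShell query shown in the log:

```shell
# get_vm_ip is a stub standing in for the PowerShell query from the log:
#   powershell.exe -NoProfile -NonInteractive \
#     "((Hyper-V\Get-VM multinode-093300).networkadapters[0]).ipaddresses[0]"
get_vm_ip() { :; }

# Poll get_vm_ip once per second until it prints a non-empty address,
# or fail after timeout_s seconds (libmachine uses a similar loop).
wait_for_ip() {
    timeout_s=$1
    elapsed=0
    while [ "$elapsed" -lt "$timeout_s" ]; do
        ip=$(get_vm_ip)
        if [ -n "$ip" ]; then
            echo "$ip"
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1   # VM never reported an address within the timeout
}
```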
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
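	The hostname step above edits `/etc/hosts` only when no entry for the node exists, and rewrites an existing `127.0.1.1` line rather than appending a duplicate. The same idempotent logic can be exercised against a scratch file (the scratch path and seed contents below are illustrative, not from the log):

```shell
# Idempotent 127.0.1.1 mapping, mirroring the provisioning script in the log.
# HOSTS points at a scratch file instead of the real /etc/hosts.
HOSTS=$(mktemp)
NODE=multinode-093300
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NODE\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # rewrite the existing 127.0.1.1 entry in place (GNU sed -i)
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NODE/" "$HOSTS"
    else
        echo "127.0.1.1 $NODE" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, since the first `grep` then finds the node name already present.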
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
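	minikube generates the server certificate in Go (the `provision.go:117` line above), signed by the CA under `.minikube\certs`. A certificate with the same org and the same SAN set can be sketched with `openssl` — self-signed here for brevity rather than CA-signed, and not minikube's actual code path:

```shell
# Self-signed sketch of the server cert from the configureAuth step above:
# same org (jenkins.multinode-093300) and the same SAN list
# (loopback, the VM IP 172.25.248.197, and the node/host names).
# Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server-key.pem -out server.pem \
    -subj "/O=jenkins.multinode-093300" \
    -addext "subjectAltName=IP:127.0.0.1,IP:172.25.248.197,DNS:localhost,DNS:minikube,DNS:multinode-093300"
```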
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
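	The `df --output=fstype / | tail -n 1` probe above is how the provisioner records the guest's root filesystem type — `tmpfs` here, as expected for the buildroot-based boot2docker image. The same one-liner works on any Linux host with GNU coreutils:

```shell
# Filesystem type of / — "tmpfs" inside the buildroot guest per the log;
# typically ext4, xfs, btrfs, or overlay on an ordinary Linux host.
fstype=$(df --output=fstype / | tail -n 1)
echo "$fstype"
```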
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
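The `diff -u old new || { mv …; systemctl … }` one-liner above is an idempotent-update pattern: the new unit is rendered to `docker.service.new`, and the install-and-restart branch runs only when it differs from (or, as here on first boot, when there is no) installed copy. A minimal sketch of that pattern, using scratch files instead of `/lib/systemd`:

```shell
# Render the candidate config, then replace + "restart" only on a difference.
# cfg is deliberately absent, mirroring the "can't stat" case in the log above.
cfg=$(mktemp -u)
new=$cfg.new
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$new"
if diff -u "$cfg" "$new" >/dev/null 2>&1; then
    rm -f "$new"            # identical: nothing to do, no service restart
    result="unchanged"
else
    mv "$new" "$cfg"        # differs or missing: install, then reload/restart
    result="replaced"
fi
echo "$result"
```

Because `diff` also fails when the installed file is missing, the same branch covers both "config changed" and "first provision", which is why the log shows the `can't stat` message followed immediately by the symlink creation.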
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
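The `fix.go` lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host-side timestamp and then set the guest clock with `sudo date -s @<epoch>` (delta here: ~5.1s). A deterministic sketch of that skew check, with both epochs fixed to values matching the log; the 2-second tolerance is illustrative, not minikube's actual policy:

```shell
# Guest-reported epoch vs. a host-side reading ~5s later (as in the log).
guest_epoch=1716206465
host_epoch=1716206470
delta=$((host_epoch - guest_epoch))
abs=${delta#-}                       # absolute value of the skew in seconds
if [ "$abs" -gt 2 ]; then
    # This is the command shape minikube runs over SSH to resync the guest.
    cmd="sudo date -s @$guest_epoch"
    echo "resync with: $cmd"
else
    echo "in sync"
fi
```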
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
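The `find … -exec mv` step above disables competing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so the runtime ignores them but they remain recoverable. The same operation against a scratch directory standing in for `/etc/cni/net.d` (filenames are illustrative):

```shell
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-other.conf"
# Rename anything matching *bridge* or *podman* that isn't already disabled.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

The `-not -name '*.mk_disabled'` guard makes the rename idempotent: a second run finds nothing left to move.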
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
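The run of `sed -i -r` commands above edits `containerd`'s TOML in place, preserving each key's indentation via the `^( *)` capture group and `\1` backreference. A self-contained re-creation of two of those edits against a scratch config (file contents trimmed to the keys the log touches):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
  restrict_oom_score_adj = true
EOF
# Same substitution shape as the log: capture leading spaces, rewrite the value.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

Note `sed -i -r` is GNU sed syntax, matching the Buildroot guest in the log; BSD sed would need `-i '' -E`.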
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
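The `/etc/hosts` update above uses a filter-then-append pattern: strip any stale `host.minikube.internal` line, append the current mapping, write to a temp file, and copy it back. A sketch with a scratch file in place of `/etc/hosts` (the log's version anchors the grep on a literal tab; the looser pattern here is for illustration):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the stale entry, append the fresh one, then replace the original file.
{ grep -v 'host\.minikube\.internal' "$hosts"
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to `$hosts.new` first (rather than redirecting onto `$hosts` directly) matters: redirecting a pipeline's output into a file that grep is still reading would truncate it mid-read.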
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
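The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each kubeadm-managed kubeconfig is deleted unless it already references the expected control-plane endpoint, so that `kubeadm init` regenerates it. A minimal sketch of that pattern (a self-contained demo using a temp directory in place of `/etc/kubernetes`; not minikube's actual source):

```shell
# Demo of the stale-kubeconfig cleanup pattern seen in the log above.
# KUBE_DIR stands in for /etc/kubernetes so the sketch is runnable without root.
ENDPOINT="https://control-plane.minikube.internal:8443"
KUBE_DIR="$(mktemp -d)"

# Seed one up-to-date and one stale kubeconfig for the demo.
printf 'server: %s\n' "$ENDPOINT" > "$KUBE_DIR/admin.conf"
printf 'server: https://old-endpoint:8443\n' > "$KUBE_DIR/kubelet.conf"

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  path="$KUBE_DIR/$f"
  if ! grep -q "$ENDPOINT" "$path" 2>/dev/null; then
    rm -f "$path"   # missing or stale: delete so kubeadm regenerates it
  fi
done

ls "$KUBE_DIR"
```

In the log, the `grep` exiting with status 2 ("No such file or directory") is expected on a fresh node and is what triggers each `sudo rm -f`.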
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
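The `oom_adj: -16` lines above come from reading the apiserver's OOM-score file under `/proc`. A sketch of that check, demonstrated on the current shell's own PID since no `kube-apiserver` is running here (the log uses the legacy `oom_adj` file; the sketch reads the newer `oom_score_adj`):

```shell
# Read the OOM score adjustment for a process, as the log does for the
# apiserver via: cat /proc/$(pgrep kube-apiserver)/oom_adj
pid=$$   # stand-in for: pgrep kube-apiserver
cat "/proc/$pid/oom_score_adj"
```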
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
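The repeated `serviceaccounts "default" not found` errors above are a normal startup poll: minikube retries `kubectl get sa default` roughly every 500ms until the controller-manager creates the default service account (here, after ~13s). The retry pattern can be sketched as a generic helper (hypothetical, not minikube's source):

```shell
# Retry a command until it succeeds, up to a given number of attempts,
# sleeping briefly between tries -- the pattern behind the poll loop above.
retry() {
  tries="$1"; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

# In the log's spirit (hypothetical invocation, requires a live cluster):
#   retry 60 kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
```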
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
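The /version exchange above reduces to reading one JSON field out of the response body; a minimal sketch in Python, with the response body copied verbatim from the log above (the helper name `control_plane_version` is hypothetical, not minikube's):

```python
import json

# Response body of GET /version as recorded in the log above
VERSION_BODY = """{
  "major": "1",
  "minor": "30",
  "gitVersion": "v1.30.1",
  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
  "gitTreeState": "clean",
  "buildDate": "2024-05-14T10:42:02Z",
  "goVersion": "go1.22.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}"""

def control_plane_version(body: str) -> str:
    """Pull gitVersion out of an apiserver /version response body."""
    return json.loads(body)["gitVersion"]
```

minikube's own check lives in `api_server.go`; this mirrors only the parse step that produces the "control plane version: v1.30.1" line.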
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
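The repeated "Waited for …ms due to client-side throttling" lines above come from the client-side rate limiter in the Kubernetes Go client; once the initial burst is spent, requests are spaced 1/QPS apart and the client sleeps for whatever remains of that interval. A rough sketch, assuming client-go's default QPS of 5 (200 ms spacing), which matches the ~145–197 ms waits logged above:

```python
def throttle_delay(elapsed_since_last_ms: float, qps: float = 5.0) -> float:
    """Remaining sleep before the next request is allowed.

    Assumes the burst allowance is already exhausted; qps=5 is
    client-go's default and is an assumption here, not from the log.
    """
    spacing_ms = 1000.0 / qps  # 200 ms at QPS 5
    return max(0.0, spacing_ms - elapsed_since_last_ms)
```

For example, a request issued 50 ms after the previous one waits 150 ms, while one issued 250 ms later proceeds immediately.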
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
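The `Get-VMSwitch` query above filters for switches that are either External or the well-known "Default Switch" GUID, then the driver picks the first match. A Python sketch of that selection, using the JSON printed above (the enum mapping 0=Private, 1=Internal, 2=External is Hyper-V's `VMSwitchType` as I understand it, stated here as an assumption):

```python
import json

# ConvertTo-Json output from the log above
SWITCH_JSON = """[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]"""

DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"
EXTERNAL = 2  # assumed VMSwitchType: 0=Private, 1=Internal, 2=External

def pick_switch(body: str) -> str:
    """Return the name of the first usable VM switch, mirroring the
    External-or-Default-Switch filter in the PowerShell query."""
    for switch in json.loads(body):
        if switch["SwitchType"] == EXTERNAL or switch["Id"] == DEFAULT_SWITCH_ID:
            return switch["Name"]
    raise RuntimeError("no usable VM switch found")
```

The PowerShell command already applies this filter server-side; the sketch just re-states both conditions to show why "Default Switch" (SwitchType 1, i.e. not External) still qualifies.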
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
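The 512-byte gap between `FileSize` and `Size` in the `New-VHD` output above is the fixed-VHD footer: a fixed VHD is the raw disk image plus a 512-byte footer appended at the end (per the VHD format specification; stated as background, not from the log):

```python
# Values from the New-VHD output above (10 MB fixed VHD)
SIZE = 10_485_760         # Size: the logical disk size
FILE_SIZE = 10_486_272    # FileSize: the file on disk
VHD_FOOTER_BYTES = 512    # fixed-VHD footer length per the VHD format spec

# The on-disk file is exactly the raw image plus the footer
assert FILE_SIZE - SIZE == VHD_FOOTER_BYTES
```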
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
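The repeated `Get-VM … .state` / `networkadapters[0].ipaddresses[0]` calls above are a poll-until-ready loop: the driver re-queries Hyper-V until an address appears (at 05:03:42 here). A minimal sketch of that retry shape, with a stub `get_ip` standing in for the PowerShell query (the stub, the 10-poll cap, and the 0.1s sleep are illustrative, not values from the driver):

```shell
#!/usr/bin/env bash
set -eu

# `get_ip` is a stub for ((Get-VM ...).networkadapters[0]).ipaddresses[0];
# it reports nothing for the first two polls and an address on the third,
# mimicking the empty stdout lines in the log. A state file is used because
# the $(...) command substitution runs the function in a subshell.
state=$(mktemp); echo 0 > "$state"
get_ip() {
  n=$(( $(cat "$state") + 1 ))
  echo "$n" > "$state"
  if [ "$n" -ge 3 ]; then echo "172.25.240.19"; fi
}

ip=""
attempts=0
while [ -z "$ip" ] && [ "$attempts" -lt 10 ]; do
  ip=$(get_ip)
  attempts=$((attempts + 1))
  if [ -z "$ip" ]; then sleep 0.1; fi   # the real driver waits between polls
done

echo "got IP $ip after $attempts polls"
```

In the log each empty `[stdout =====>]` after the `ipaddresses[0]` query is one failed poll, followed by a fresh state check roughly a second later.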
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
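The `/etc/hosts` patch above can be exercised locally. A simplified sketch against a temporary file (no `sudo`; the temp path and seed contents are illustrative, and the `grep` patterns are condensed from the logged command):

```shell
#!/usr/bin/env bash
set -eu

NAME=multinode-093300-m02            # hostname from the log
HOSTS=$(mktemp)                      # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "\s${NAME}$" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    # An existing 127.0.1.1 entry is rewritten in place...
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" "$HOSTS"
  else
    # ...otherwise a fresh entry is appended.
    echo "127.0.1.1 ${NAME}" >> "$HOSTS"
  fi
fi

grep '^127.0.1.1' "$HOSTS"
```

The rewrite-vs-append split keeps the file idempotent: rerunning the provisioner after a hostname change updates the single `127.0.1.1` line rather than accumulating duplicates.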
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
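`copyHostCerts` above does a remove-then-copy for each cert into the store root (the "found …, removing …" / "cp: …" pairs). A minimal sketch of that idempotent staging, with temp dirs standing in for the `.minikube` paths and dummy file contents:

```shell
#!/usr/bin/env bash
set -eu

src=$(mktemp -d); dst=$(mktemp -d)    # stand-ins for .minikube\certs and .minikube
for f in ca.pem cert.pem key.pem; do
  echo "dummy $f" > "$src/$f"
  touch "$dst/$f"                     # pre-existing stale copy, as in the log
done

for f in ca.pem cert.pem key.pem; do
  [ -e "$dst/$f" ] && rm "$dst/$f"    # "found ..., removing ..."
  cp "$src/$f" "$dst/$f"              # "cp: ... --> ..."
done

ls "$dst"
```

Removing before copying guarantees the destination always reflects the current source cert, even if permissions or an open handle would make an in-place overwrite fail.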
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
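The one-liner above is a write-new-then-replace-if-changed pattern: `diff` exits non-zero when the candidate differs from the installed unit (or, as on this first boot, when the unit does not exist yet), which triggers the move plus daemon-reload/enable/restart. A local sketch with temp files standing in for the systemd paths (the `systemctl` steps are omitted since they need a live systemd):

```shell
#!/usr/bin/env bash
set -eu

dir=$(mktemp -d)
current="$dir/docker.service"         # absent on first provision, as in the log
candidate="$dir/docker.service.new"
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$candidate"

# diff exits non-zero when the files differ or the target is missing
# ("can't stat ... No such file or directory" in the log), which takes
# the replacement branch.
if ! diff -u "$current" "$candidate" 2>/dev/null; then
  mv "$candidate" "$current"
  echo "unit installed"               # real flow: daemon-reload, enable, restart
fi
```

When the unit is already up to date, `diff` exits 0 and the restart is skipped entirely, which is why re-provisioning an unchanged node does not bounce the Docker daemon.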
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
	
	
	==> Docker <==
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.315487220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.316184625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:45 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:45.316419326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:45 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:06:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ffde8c3540f6d3237aaee7b7efe3fb67a2eaf2d46da1957d9f1398416fa886e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 12:06:46 multinode-093300 cri-dockerd[1234]: time="2024-05-20T12:06:46Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.812890560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813037260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813087160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813245260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:19 multinode-093300 dockerd[1329]: 2024/05/20 12:19:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb9d0befbc6f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   2ffde8c3540f6       busybox-fc5497c4f-rk7lk
	c2f3e10de8772       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago      Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                                         21 minutes ago      Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                                         22 minutes ago      Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                                         22 minutes ago      Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                                         22 minutes ago      Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	[INFO] 10.244.0.3:48640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231801s
	[INFO] 10.244.0.3:43113 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.175241678s
	[INFO] 10.244.0.3:55421 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066162156s
	[INFO] 10.244.0.3:57037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.307819065s
	[INFO] 10.244.0.3:46291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186401s
	[INFO] 10.244.0.3:42353 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028087509s
	[INFO] 10.244.0.3:39344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194901s
	[INFO] 10.244.0.3:36993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000272401s
	[INFO] 10.244.0.3:48495 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011425645s
	[INFO] 10.244.0.3:49945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142201s
	[INFO] 10.244.0.3:52438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.0.3:51309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.0.3:43788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001446s
	[INFO] 10.244.0.3:48355 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215101s
	[INFO] 10.244.0.3:46628 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000596s
	[INFO] 10.244.0.3:52558 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000566602s
	[INFO] 10.244.0.3:32981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320401s
	[INFO] 10.244.0.3:49440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000250601s
	[INFO] 10.244.0.3:54411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000254101s
	[INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:23:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rk7lk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m   node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                21m   kubelet          Node multinode-093300 status is now: NodeReady
	
	
	Name:               multinode-093300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T05_22_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:22:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:23:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.250.168
	  Hostname:    multinode-093300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f1736c8bff04fb69e3451244d381888
	  System UUID:                8c66bb4f-dce2-f44a-be67-ef9ccca5596c
	  Boot ID:                    aa950763-894a-47de-9417-30ddee9d31ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ncmp8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-cjqrv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      83s
	  kube-system                 kube-proxy-8b6tx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  83s (x2 over 84s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x2 over 84s)  kubelet          Node multinode-093300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x2 over 84s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                node-controller  Node multinode-093300-m03 event: Registered Node multinode-093300-m03 in Controller
	  Normal  NodeReady                60s                kubelet          Node multinode-093300-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	[May20 12:06] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:01:57.739348Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:02:43.609464Z","caller":"traceutil/trace.go:171","msg":"trace[355698758] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"126.890272ms","start":"2024-05-20T12:02:43.482555Z","end":"2024-05-20T12:02:43.609446Z","steps":["trace[355698758] 'process raft request'  (duration: 126.74047ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:11:57.883212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":664}
	{"level":"info","ts":"2024-05-20T12:11:57.901107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":664,"took":"17.242145ms","hash":418129480,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T12:11:57.901416Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":418129480,"revision":664,"compact-revision":-1}
	{"level":"info","ts":"2024-05-20T12:16:57.900461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":904}
	{"level":"info","ts":"2024-05-20T12:16:57.908914Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":904,"took":"7.825229ms","hash":2564373708,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:16:57.908964Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2564373708,"revision":904,"compact-revision":664}
	{"level":"info","ts":"2024-05-20T12:19:52.417856Z","caller":"traceutil/trace.go:171","msg":"trace[275574744] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"147.641704ms","start":"2024-05-20T12:19:52.270178Z","end":"2024-05-20T12:19:52.41782Z","steps":["trace[275574744] 'process raft request'  (duration: 146.882501ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:19:59.564967Z","caller":"traceutil/trace.go:171","msg":"trace[1817921994] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"164.914676ms","start":"2024-05-20T12:19:59.400035Z","end":"2024-05-20T12:19:59.56495Z","steps":["trace[1817921994] 'process raft request'  (duration: 164.802576ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:20:50.11914Z","caller":"traceutil/trace.go:171","msg":"trace[625224998] transaction","detail":"{read_only:false; response_revision:1331; number_of_response:1; }","duration":"100.017619ms","start":"2024-05-20T12:20:50.019102Z","end":"2024-05-20T12:20:50.119119Z","steps":["trace[625224998] 'process raft request'  (duration: 99.793918ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:21:57.916879Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2024-05-20T12:21:57.924994Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1145,"took":"7.832034ms","hash":2574517761,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:21:57.925085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2574517761,"revision":1145,"compact-revision":904}
	{"level":"info","ts":"2024-05-20T12:22:25.736809Z","caller":"traceutil/trace.go:171","msg":"trace[430372741] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"224.491074ms","start":"2024-05-20T12:22:25.512281Z","end":"2024-05-20T12:22:25.736772Z","steps":["trace[430372741] 'process raft request'  (duration: 224.253073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:25.974125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.558296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:25.974225Z","caller":"traceutil/trace.go:171","msg":"trace[1439624153] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"206.703098ms","start":"2024-05-20T12:22:25.767508Z","end":"2024-05-20T12:22:25.974212Z","steps":["trace[1439624153] 'range keys from in-memory index tree'  (duration: 206.506896ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:26.864539Z","caller":"traceutil/trace.go:171","msg":"trace[1459107816] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"150.383153ms","start":"2024-05-20T12:22:26.714135Z","end":"2024-05-20T12:22:26.864518Z","steps":["trace[1459107816] 'process raft request'  (duration: 150.225653ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:43.639207Z","caller":"traceutil/trace.go:171","msg":"trace[1481916495] transaction","detail":"{read_only:false; response_revision:1461; number_of_response:1; }","duration":"159.576496ms","start":"2024-05-20T12:22:43.479611Z","end":"2024-05-20T12:22:43.639188Z","steps":["trace[1481916495] 'process raft request'  (duration: 159.463096ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.777887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.881564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:44.778337Z","caller":"traceutil/trace.go:171","msg":"trace[1542137351] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1462; }","duration":"427.186365ms","start":"2024-05-20T12:22:44.350923Z","end":"2024-05-20T12:22:44.778109Z","steps":["trace[1542137351] 'range keys from in-memory index tree'  (duration: 426.694864ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.394969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-093300-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-05-20T12:22:44.778786Z","caller":"traceutil/trace.go:171","msg":"trace[755691261] range","detail":"{range_begin:/registry/minions/multinode-093300-m03; range_end:; response_count:1; response_revision:1462; }","duration":"336.839571ms","start":"2024-05-20T12:22:44.441934Z","end":"2024-05-20T12:22:44.778774Z","steps":["trace[755691261] 'range keys from in-memory index tree'  (duration: 336.219968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.441838Z","time spent":"336.975772ms","remote":"127.0.0.1:55370","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3171,"request content":"key:\"/registry/minions/multinode-093300-m03\" "}
	{"level":"warn","ts":"2024-05-20T12:22:44.778433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.35091Z","time spent":"427.511667ms","remote":"127.0.0.1:55230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 12:23:56 up 24 min,  0 users,  load average: 0.64, 0.36, 0.24
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:22:46.916298       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:22:56.928092       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:22:56.928222       1 main.go:227] handling current node
	I0520 12:22:56.928237       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:22:56.928245       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:23:06.935382       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:23:06.935501       1 main.go:227] handling current node
	I0520 12:23:06.935515       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:23:06.935523       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:23:16.943400       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:23:16.943545       1 main.go:227] handling current node
	I0520 12:23:16.943646       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:23:16.943658       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:23:26.949865       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:23:26.950070       1 main.go:227] handling current node
	I0520 12:23:26.950105       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:23:26.950140       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:23:36.961006       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:23:36.961616       1 main.go:227] handling current node
	I0520 12:23:36.961720       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:23:36.961779       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:23:46.976931       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:23:46.976975       1 main.go:227] handling current node
	I0520 12:23:46.976988       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:23:46.976995       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [477e3df15a9c] <==
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 12:18:09.877717       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62575: use of closed network connection
	E0520 12:18:10.700260       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62580: use of closed network connection
	E0520 12:18:11.474273       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62585: use of closed network connection
	E0520 12:18:48.326152       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62602: use of closed network connection
	E0520 12:18:58.782603       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62604: use of closed network connection
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:16.417564       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.906228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="303.284225ms"
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0520 12:06:44.544627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.513035ms"
	I0520 12:06:44.556530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.014067ms"
	I0520 12:06:44.557710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.9µs"
	I0520 12:06:47.616256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.299406ms"
	I0520 12:06:47.616355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.5µs"
	I0520 12:22:33.084385       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-093300-m03\" does not exist"
	I0520 12:22:33.104885       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-093300-m03" podCIDRs=["10.244.1.0/24"]
	I0520 12:22:35.968109       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-093300-m03"
	I0520 12:22:56.341095       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-093300-m03"
	I0520 12:22:56.368042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.3µs"
	I0520 12:22:56.389258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.3µs"
	I0520 12:22:59.571331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.348641ms"
	I0520 12:22:59.572056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.6µs"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:19:02 multinode-093300 kubelet[2141]: E0520 12:19:02.782956    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:19:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:19:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:19:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:19:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:20:02 multinode-093300 kubelet[2141]: E0520 12:20:02.780327    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:20:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:20:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:20:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:20:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:21:02 multinode-093300 kubelet[2141]: E0520 12:21:02.778436    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:21:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:21:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:21:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:21:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:22:02 multinode-093300 kubelet[2141]: E0520 12:22:02.780074    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:22:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:22:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:22:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:22:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:23:02 multinode-093300 kubelet[2141]: E0520 12:23:02.780285    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:23:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:23:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:23:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:23:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:23:48.273018    6612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.8384894s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (276.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (73.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status --output json --alsologtostderr: exit status 2 (37.7487823s)

                                                
                                                
-- stdout --
	[{"Name":"multinode-093300","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-093300-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-093300-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:24:21.780417   14448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 05:24:21.787749   14448 out.go:291] Setting OutFile to fd 1740 ...
	I0520 05:24:21.788785   14448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:24:21.788785   14448 out.go:304] Setting ErrFile to fd 1088...
	I0520 05:24:21.788785   14448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:24:21.809622   14448 out.go:298] Setting JSON to true
	I0520 05:24:21.809622   14448 mustload.go:65] Loading cluster: multinode-093300
	I0520 05:24:21.809622   14448 notify.go:220] Checking for updates...
	I0520 05:24:21.810362   14448 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:24:21.810362   14448 status.go:255] checking status of multinode-093300 ...
	I0520 05:24:21.811161   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:24:24.182889   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:24.182977   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:24.182977   14448 status.go:330] multinode-093300 host status = "Running" (err=<nil>)
	I0520 05:24:24.183067   14448 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:24:24.183757   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:24:26.525570   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:26.526020   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:26.526119   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:24:29.274602   14448 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:24:29.274679   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:29.274679   14448 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:24:29.287527   14448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:24:29.288524   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:24:31.571547   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:31.572298   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:31.572298   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:24:34.308666   14448 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:24:34.309590   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:34.309845   14448 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:24:34.420458   14448 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1328562s)
	I0520 05:24:34.434213   14448 ssh_runner.go:195] Run: systemctl --version
	I0520 05:24:34.462568   14448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:24:34.491916   14448 kubeconfig.go:125] found "multinode-093300" server: "https://172.25.248.197:8443"
	I0520 05:24:34.491916   14448 api_server.go:166] Checking apiserver status ...
	I0520 05:24:34.506263   14448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:24:34.548807   14448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2091/cgroup
	W0520 05:24:34.566848   14448 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 05:24:34.578651   14448 ssh_runner.go:195] Run: ls
	I0520 05:24:34.590117   14448 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:24:34.597172   14448 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:24:34.597172   14448 status.go:422] multinode-093300 apiserver status = Running (err=<nil>)
	I0520 05:24:34.597172   14448 status.go:257] multinode-093300 status: &{Name:multinode-093300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 05:24:34.597642   14448 status.go:255] checking status of multinode-093300-m02 ...
	I0520 05:24:34.597924   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:24:36.864298   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:36.864298   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:36.865310   14448 status.go:330] multinode-093300-m02 host status = "Running" (err=<nil>)
	I0520 05:24:36.865310   14448 host.go:66] Checking if "multinode-093300-m02" exists ...
	I0520 05:24:36.865401   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:24:39.153219   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:39.153219   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:39.154192   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:24:41.885156   14448 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:24:41.885156   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:41.885891   14448 host.go:66] Checking if "multinode-093300-m02" exists ...
	I0520 05:24:41.900624   14448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:24:41.900624   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:24:44.179367   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:44.179367   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:44.179495   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:24:46.908951   14448 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:24:46.908951   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:46.909466   14448 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:24:47.008721   14448 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1080839s)
	I0520 05:24:47.021860   14448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:24:47.047786   14448 status.go:257] multinode-093300-m02 status: &{Name:multinode-093300-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 05:24:47.047939   14448 status.go:255] checking status of multinode-093300-m03 ...
	I0520 05:24:47.048627   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:24:49.353284   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:49.353284   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:49.353284   14448 status.go:330] multinode-093300-m03 host status = "Running" (err=<nil>)
	I0520 05:24:49.353284   14448 host.go:66] Checking if "multinode-093300-m03" exists ...
	I0520 05:24:49.354200   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:24:51.634192   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:51.634192   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:51.635062   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:24:54.329218   14448 main.go:141] libmachine: [stdout =====>] : 172.25.250.168
	
	I0520 05:24:54.329218   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:54.330046   14448 host.go:66] Checking if "multinode-093300-m03" exists ...
	I0520 05:24:54.346076   14448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:24:54.346076   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:24:56.603045   14448 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:24:56.603045   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:56.603189   14448 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:24:59.247636   14448 main.go:141] libmachine: [stdout =====>] : 172.25.250.168
	
	I0520 05:24:59.247636   14448 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:24:59.248401   14448 sshutil.go:53] new ssh client: &{IP:172.25.250.168 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m03\id_rsa Username:docker}
	I0520 05:24:59.354460   14448 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0083723s)
	I0520 05:24:59.367967   14448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:24:59.398548   14448 status.go:257] multinode-093300-m03 status: &{Name:multinode-093300-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-093300 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.7198437s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.7307865s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-093300                               | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- apply -f                   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT | 20 May 24 05:06 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- rollout                    | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-rk7lk -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-093300 -v 3                      | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:19 PDT | 20 May 24 05:22 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
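The repeated `Get-VM … .networkadapters[0]).ipaddresses[0]` calls above (empty stdout at 04:59:48 and 04:59:54, an address at 05:00:00) are the driver polling until the guest's adapter reports an IP. A minimal standalone sketch of that retry pattern, with the PowerShell call replaced by a mock that "succeeds" on the third poll (the real driver shells out to `powershell.exe` and sleeps between attempts):

```shell
# Sketch only: mocks the Hyper-V IP query instead of calling
#   powershell.exe -NoProfile -NonInteractive \
#     "(( Hyper-V\Get-VM <vm> ).networkadapters[0]).ipaddresses[0]"
ip=""
attempt=0
while [ -z "$ip" ] && [ "$attempt" -lt 5 ]; do
  attempt=$((attempt + 1))
  # mock: the adapter has no address on the first two polls
  if [ "$attempt" -ge 3 ]; then
    ip="172.25.248.197"
  fi
done
echo "polled $attempt times, ip=$ip"
```

The real loop has no fixed attempt cap at this level; the 5-iteration bound here just keeps the sketch from spinning if the mock is edited.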
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
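The SSH command above patches `/etc/hosts` idempotently: if the hostname is already present it does nothing, otherwise it rewrites an existing `127.0.1.1` line or appends one. The same logic can be exercised against a scratch copy (the real command targets `/etc/hosts` via `sudo tee`/`sudo sed`):

```shell
# Run the hostname patch against a temp file instead of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"
name=multinode-093300
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # a 127.0.1.1 entry exists: replace it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # no 127.0.1.1 entry yet: append one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

Running the block a second time is a no-op, which is the point: provisioning may re-run this step on every `minikube start`.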
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
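The `diff … || { mv …; systemctl … }` one-liner at 05:00:48 is an install-if-changed guard: the new unit file only replaces the old one (and only triggers a daemon-reload/restart) when the two differ, and a missing target file counts as "differs", which is why the log shows `diff: can't stat … No such file or directory` followed by the symlink creation on this fresh VM. A scratch-directory sketch of the same pattern, with the `systemctl` calls left out:

```shell
# Install-if-changed, on temp files rather than /lib/systemd/system.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
# docker.service does not exist yet, so diff fails and .new is promoted;
# on an unchanged re-run, diff succeeds and the mv (and restart) are skipped.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || \
  mv "$dir/docker.service.new" "$dir/docker.service"
ls "$dir"
```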
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
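The `fix.go` lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host's clock, and reset the guest with `sudo date -s @<epoch>` when they drift (here by ~5.1s). A sketch of the delta computation with the two clock reads mocked to the values from this log — the `ssh`/`date -s` invocations are only shown in comments:

```shell
# Mocked clock values taken from the log above:
host_now=1716206460     # host-side epoch at comparison time
guest_now=1716206465    # from: ssh <vm> 'date +%s' (mocked here)
delta=$((guest_now - host_now))
echo "guest clock delta: ${delta}s"
# When the drift exceeds a tolerance, the real code runs:
#   ssh <vm> sudo date -s @<target-epoch>
if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
  echo "would reset guest clock"
fi
```

The threshold of 2 seconds here is illustrative, not minikube's actual tolerance.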
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
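The `find`/`mv` step above renames any bridge or podman CNI config so the container runtime ignores it, which is what the "disabled [...] bridge cni config(s)" line reports. The same rename can be reproduced against a scratch directory (no sudo, hypothetical file names):

```shell
# Disable bridge/podman CNI configs by appending .mk_disabled, mirroring
# the find invocation in the log. A temp dir stands in for /etc/cni/net.d.
cnidir="$(mktemp -d)"
touch "$cnidir/87-podman-bridge.conflist" "$cnidir/10-kindnet.conflist"

find "$cnidir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cnidir"
```

Only the podman-bridge config is renamed; unrelated configs (here the hypothetical kindnet one) are left alone.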
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
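The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: the pause image is pinned to 3.9, `SystemdCgroup` is forced off to match the cgroupfs driver, and the runc shim is normalized to v2. A few of the same substitutions, applied to a scratch copy with assumed starting values:

```shell
# Apply the log's sed patterns to a scratch config.toml instead of
# /etc/containerd/config.toml. The initial contents are assumed for
# illustration.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  restrict_oom_score_adj = true
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The `( *)` capture preserves whatever indentation the key already had, so the rewrite works at any nesting depth in the TOML.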
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
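The bare `ExecStart=` line in the unit dumped above is the standard systemd idiom its own comments describe: the empty directive clears the command inherited from the base unit so the next `ExecStart=` replaces it instead of being rejected as a duplicate. A minimal drop-in of the same shape (hypothetical paths and flags):

```shell
# Write a drop-in that clears the inherited ExecStart before setting a
# new one; a temp dir stands in for /etc/systemd/system/docker.service.d.
dropin_dir="$(mktemp -d)"
cat > "$dropin_dir/10-override.conf" <<'EOF'
[Service]
# Empty directive clears the ExecStart inherited from the base unit;
# without it systemd refuses two ExecStart= lines for a Type=notify
# service.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' "$dropin_dir/10-override.conf"
```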
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
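The "configuring docker to use cgroupfs" step copies a 130-byte `daemon.json` to the node; the log does not show its contents, so the sketch below assumes the conventional `exec-opts` form for selecting a cgroup driver. Written to a scratch path, not `/etc/docker/daemon.json`:

```shell
# Hypothetical daemon.json forcing the cgroupfs driver. The real payload
# is not shown in the log; this is an assumed shape for illustration.
dj="$(mktemp)"
cat > "$dj" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "storage-driver": "overlay2"
}
EOF
grep cgroupfs "$dj"
```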
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
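The "Will wait 60s for socket path" step above is a poll loop: `stat` the socket until it exists or the deadline passes. A self-contained sketch, with a background `touch` standing in for cri-dockerd creating the socket:

```shell
# Poll for a path with a deadline; a scratch file stands in for
# /var/run/cri-dockerd.sock, and the background job simulates the daemon
# creating it about a second later.
sock="$(mktemp -d)/cri-dockerd.sock"
( sleep 1; touch "$sock" ) &
deadline=$(( $(date +%s) + 60 ))
ok=""
while [ "$(date +%s)" -lt "$deadline" ]; do
  if stat "$sock" >/dev/null 2>&1; then ok=yes; break; fi
  sleep 0.2
done
wait
echo "socket ready: ${ok:-no}"
```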
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
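The `/etc/hosts` update above is idempotent: `grep -v` drops any stale line ending in the hostname, then the fresh mapping is appended, so repeated runs never accumulate duplicates. The same trick against a scratch file (a stale hypothetical entry is seeded to show the replacement):

```shell
# Idempotent hosts-entry update: strip old host.minikube.internal lines,
# append the current IP. Scratch file stands in for /etc/hosts.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n172.25.240.9\thost.minikube.internal\n' > "$hosts"

tab="$(printf '\t')"
{ grep -v "${tab}host\.minikube\.internal$" "$hosts"
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```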
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
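The preload sequence above follows a check-copy-unpack-delete pattern: `stat` the target, transfer the tarball only when the check fails, extract it into `/var`, then remove it. A miniature version of the same flow (a tiny gzip tarball stands in for the 359 MB lz4 preload, and plain `tar -z` replaces the real run's `-I lz4`):

```shell
# Check-copy-unpack-delete sketch of the preload transfer; all paths are
# scratch stand-ins for the node's / and /preloaded.tar.lz4.
work="$(mktemp -d)"
mkdir -p "$work/src/lib/docker"
echo data > "$work/src/lib/docker/layer"
tar -czf "$work/preload.tgz" -C "$work/src" lib

dst="$work/var"
mkdir -p "$dst"
if ! stat -c "%s %y" "$dst/preloaded.tar.lz4" >/dev/null 2>&1; then
  cp "$work/preload.tgz" "$dst/preloaded.tar.lz4"   # stands in for scp
fi
tar -xzf "$dst/preloaded.tar.lz4" -C "$dst"
rm "$dst/preloaded.tar.lz4"
ls "$dst/lib/docker"
```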
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
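The sequence above installs each extra CA by computing its OpenSSL subject hash and symlinking `<hash>.0` into the cert directory, which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of that pattern (not minikube source; the function name and optional directory argument are illustrative additions):

```shell
# Hash a PEM cert with OpenSSL's subject-hash and symlink <hash>.0
# into the cert directory, mirroring the `openssl x509 -hash -noout`
# and `ln -fs` commands shown in the log above.
install_ca() {
  cert="$1"
  certdir="${2:-/etc/ssl/certs}"          # default matches the log's target
  hash=$(openssl x509 -hash -noout -in "$cert") || return 1
  ln -fs "$cert" "${certdir}/${hash}.0"   # e.g. b5213941.0 in the log
}
```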
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
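The grep-and-remove sequence above checks each kubeconfig for the expected control-plane endpoint and deletes any file that lacks it, so `kubeadm init` can regenerate a fresh copy. An illustrative reconstruction of that loop (not minikube source; the function name and directory argument are invented for the sketch, while the endpoint string and file names are taken from the log):

```shell
# For each kubeconfig, keep it only if it references the expected
# control-plane endpoint; otherwise remove it (missing or stale,
# either way kubeadm will rewrite it on init).
clean_stale_confs() {
  dir="$1"
  endpoint="${2:-https://control-plane.minikube.internal:8443}"
  for name in admin kubelet controller-manager scheduler; do
    conf="${dir}/${name}.conf"
    if ! grep -q "$endpoint" "$conf" 2>/dev/null; then
      rm -f "$conf"
    fi
  done
}
```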
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
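The `--discovery-token-ca-cert-hash` printed in the join command above follows kubeadm's convention: the literal prefix `sha256:` plus the hex SHA-256 digest of the cluster CA's DER-encoded public key (SPKI). A minimal sketch of the formatting step only — `spki_der` is a placeholder byte string standing in for the real DER bytes, which a real cluster would extract from `ca.crt`:

```python
import hashlib

def discovery_hash(spki_der: bytes) -> str:
    """Format a kubeadm-style discovery-token-ca-cert-hash from DER-encoded SPKI bytes."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes; not a real certificate's public key.
print(discovery_hash(b"\x30\x82placeholder"))
```

The resulting string is what a joining node compares against the CA it receives during TLS bootstrap.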
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
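The `stat /opt/cni/bin/portmap` output above is how minikube confirms the CNI plugin binary exists before applying the kindnet manifest. A small sketch (not minikube's actual parser) that pulls the file name, size, and permission mode out of GNU-coreutils-style `stat` text:

```python
import re

def parse_stat(output: str) -> dict:
    """Extract name, size, and octal mode from `stat` text output (GNU coreutils format)."""
    fields = {}
    m = re.search(r"File:\s+(\S+)", output)
    if m:
        fields["file"] = m.group(1)
    m = re.search(r"Size:\s+(\d+)", output)
    if m:
        fields["size"] = int(m.group(1))
    m = re.search(r"Access:\s+\((\d{4})/", output)  # e.g. "Access: (0755/-rwxr-xr-x)"
    if m:
        fields["mode"] = m.group(1)
    return fields

sample = """  File: /opt/cni/bin/portmap
  Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)"""
print(parse_stat(sample))
```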
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
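The repeated `kubectl get sa default` calls above are minikube polling (at roughly 500 ms intervals, per the timestamps) until kubeadm's controller-manager creates the `default` service account; the loop ends once the `NAME SECRETS AGE` table appears. A generic retry-until-success sketch of the same pattern — the names here are illustrative, not minikube's:

```python
import time

def wait_for(check, timeout=30.0, interval=0.5):
    """Poll `check()` until it returns truthy or `timeout` elapses.

    Returns the number of attempts made; raises TimeoutError on expiry.
    """
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if check():
            return attempts
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulate a service account that appears on the third poll.
state = {"calls": 0}
def sa_exists():
    state["calls"] += 1
    return state["calls"] >= 3

print(wait_for(sa_exists, timeout=5.0, interval=0.01))  # → 3
```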
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
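The `sed` pipeline at 05:02:17.385984 edits the CoreDNS Corefile in place, inserting a `hosts` block (mapping `host.minikube.internal` to the host gateway IP) immediately before the `forward` plugin so the in-cluster DNS resolves the host first and falls through to upstream otherwise. A plain-Python sketch of the same text edit — illustrative only, not minikube's implementation:

```python
def inject_hosts(corefile: str, ip: str, hostname: str = "host.minikube.internal") -> str:
    """Insert a CoreDNS `hosts` block immediately before the `forward` directive."""
    out = []
    for line in corefile.splitlines():
        if line.strip().startswith("forward ."):
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}hosts {{")
            out.append(f"{indent}   {ip} {hostname}")
            out.append(f"{indent}   fallthrough")
            out.append(f"{indent}}}")
        out.append(line)
    return "\n".join(out)

corefile = """.:53 {
    errors
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
}"""
print(inject_hosts(corefile, "172.25.240.1"))
```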
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
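The Get-VMSwitch call above returns JSON that the driver filters for either an External switch or the built-in "Default Switch" (well-known GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444), and the log then reports `Using switch "Default Switch"`. A minimal sketch of that selection logic, assuming the captured JSON shape; `pick_switch` is an illustrative name, not minikube's actual code:

```python
import json

# Hyper-V's well-known "Default Switch" GUID, as queried in the log above.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(get_vmswitch_json: str) -> str:
    """Prefer an External switch; otherwise fall back to the Default Switch."""
    switches = json.loads(get_vmswitch_json)
    # Hyper-V SwitchType enum: 0 = Private, 1 = Internal, 2 = External.
    external = [s for s in switches if s["SwitchType"] == 2]
    if external:
        return external[0]["Name"]
    for s in switches:
        if s["Id"] == DEFAULT_SWITCH_ID:
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")

# The stdout captured in the log above (Default Switch reports SwitchType 1):
captured = '[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]'
print(pick_switch(captured))  # Default Switch
```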
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
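The New-VHD output above shows the fixed-VHD trick the driver relies on: it creates a small fixed VHD (raw image plus a 512-byte trailing footer, which is why FileSize 10486272 is exactly SizeBytes 10485760 plus 512), writes a "magic tar header" carrying the SSH key into the raw area, then converts to dynamic and resizes to the requested 20000MB. A quick check of that size arithmetic:

```python
MB = 1024 * 1024
size_bytes = 10 * MB          # New-VHD -SizeBytes 10MB  -> Size in the log
vhd_footer = 512              # fixed-VHD trailing footer
file_size = size_bytes + vhd_footer  # -> FileSize in the log

print(size_bytes, file_size)  # 10485760 10486272
```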
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
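The repeated Get-VM `.state` / `.networkadapters[0].ipaddresses[0]` calls above form a poll loop: the driver re-queries until the guest's first NIC reports an address (empty stdout until 172.25.240.19 appears). A generic sketch of that retry pattern; `wait_for_ip` is illustrative, and the real loop shells out to PowerShell and sleeps between attempts:

```python
def wait_for_ip(query_ip, attempts=60):
    """Re-run query_ip until it returns a non-empty address.

    The real driver sleeps ~1s between attempts; omitted here for brevity.
    """
    for _ in range(attempts):
        ip = query_ip()
        if ip:
            return ip
    raise TimeoutError("VM never reported an IP address")

# Simulate the log above: several empty answers, then the address.
answers = iter(["", "", "", "", "", "172.25.240.19"])
print(wait_for_ip(lambda: next(answers)))  # 172.25.240.19
```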
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
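The SSH script above pins the new hostname to 127.0.1.1: it skips the edit if any line already ends with the hostname, rewrites an existing 127.0.1.1 entry if present, and otherwise appends one. The same logic, sketched against an in-memory hosts file (`pin_hostname` is an illustrative name; the real edit is the sed/tee script shown in the log):

```python
import re

def pin_hostname(hosts: str, name: str) -> str:
    """Mirror the grep/sed script: skip if pinned, else rewrite or append 127.0.1.1."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, flags=re.M):
        return hosts  # hostname already present on some line
    if re.search(r"^127\.0\.1\.1\s", hosts, flags=re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    return hosts + f"127.0.1.1 {name}\n"

sample = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
print(pin_hostname(sample, "multinode-093300-m02"))
```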
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
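copyHostCerts above refreshes each PEM in the .minikube store root with a found/rm/cp sequence: any stale copy is removed before the fresh cert is copied in, and the byte count (1078, 1123, 1675) is logged. A small sketch of that pattern under the same assumptions; `copy_host_cert` is an illustrative helper, not minikube's exec_runner:

```python
import shutil
import tempfile
from pathlib import Path

def copy_host_cert(src: Path, dst: Path) -> int:
    """Remove any stale destination, then copy; returns bytes written (as logged)."""
    if dst.exists():
        dst.unlink()              # "found ..., removing ..."
    shutil.copyfile(src, dst)     # "cp: ... --> ... (N bytes)"
    return dst.stat().st_size

root = Path(tempfile.mkdtemp())
src = root / "ca.pem"
src.write_bytes(b"-" * 1078)      # same size as the ca.pem in the log
dst = root / "store-ca.pem"
dst.write_bytes(b"old")           # stale copy to be replaced
print(copy_host_cert(src, dst))   # 1078
```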
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
	
	
	==> Docker <==
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813087160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:06:46 multinode-093300 dockerd[1336]: time="2024-05-20T12:06:46.813245260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:19 multinode-093300 dockerd[1329]: 2024/05/20 12:19:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb9d0befbc6f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   2ffde8c3540f6       busybox-fc5497c4f-rk7lk
	c2f3e10de8772       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              22 minutes ago      Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                                         23 minutes ago      Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                                         23 minutes ago      Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                                         23 minutes ago      Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                                         23 minutes ago      Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	[INFO] 10.244.0.3:48640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231801s
	[INFO] 10.244.0.3:43113 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.175241678s
	[INFO] 10.244.0.3:55421 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066162156s
	[INFO] 10.244.0.3:57037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.307819065s
	[INFO] 10.244.0.3:46291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186401s
	[INFO] 10.244.0.3:42353 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028087509s
	[INFO] 10.244.0.3:39344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194901s
	[INFO] 10.244.0.3:36993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000272401s
	[INFO] 10.244.0.3:48495 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011425645s
	[INFO] 10.244.0.3:49945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142201s
	[INFO] 10.244.0.3:52438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.0.3:51309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.0.3:43788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001446s
	[INFO] 10.244.0.3:48355 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215101s
	[INFO] 10.244.0.3:46628 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000596s
	[INFO] 10.244.0.3:52558 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000566602s
	[INFO] 10.244.0.3:32981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320401s
	[INFO] 10.244.0.3:49440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000250601s
	[INFO] 10.244.0.3:54411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000254101s
	[INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:25:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rk7lk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23m   kube-proxy       
	  Normal  Starting                 23m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m   kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m   kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m   kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m   node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                22m   kubelet          Node multinode-093300 status is now: NodeReady
	
	
	Name:               multinode-093300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T05_22_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:22:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.250.168
	  Hostname:    multinode-093300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f1736c8bff04fb69e3451244d381888
	  System UUID:                8c66bb4f-dce2-f44a-be67-ef9ccca5596c
	  Boot ID:                    aa950763-894a-47de-9417-30ddee9d31ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ncmp8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-cjqrv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m47s
	  kube-system                 kube-proxy-8b6tx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m47s (x2 over 2m48s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x2 over 2m48s)  kubelet          Node multinode-093300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x2 over 2m48s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m45s                  node-controller  Node multinode-093300-m03 event: Registered Node multinode-093300-m03 in Controller
	  Normal  NodeReady                2m24s                  kubelet          Node multinode-093300-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	[May20 12:06] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:01:57.739348Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T12:02:43.609464Z","caller":"traceutil/trace.go:171","msg":"trace[355698758] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"126.890272ms","start":"2024-05-20T12:02:43.482555Z","end":"2024-05-20T12:02:43.609446Z","steps":["trace[355698758] 'process raft request'  (duration: 126.74047ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:11:57.883212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":664}
	{"level":"info","ts":"2024-05-20T12:11:57.901107Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":664,"took":"17.242145ms","hash":418129480,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T12:11:57.901416Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":418129480,"revision":664,"compact-revision":-1}
	{"level":"info","ts":"2024-05-20T12:16:57.900461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":904}
	{"level":"info","ts":"2024-05-20T12:16:57.908914Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":904,"took":"7.825229ms","hash":2564373708,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:16:57.908964Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2564373708,"revision":904,"compact-revision":664}
	{"level":"info","ts":"2024-05-20T12:19:52.417856Z","caller":"traceutil/trace.go:171","msg":"trace[275574744] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"147.641704ms","start":"2024-05-20T12:19:52.270178Z","end":"2024-05-20T12:19:52.41782Z","steps":["trace[275574744] 'process raft request'  (duration: 146.882501ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:19:59.564967Z","caller":"traceutil/trace.go:171","msg":"trace[1817921994] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"164.914676ms","start":"2024-05-20T12:19:59.400035Z","end":"2024-05-20T12:19:59.56495Z","steps":["trace[1817921994] 'process raft request'  (duration: 164.802576ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:20:50.11914Z","caller":"traceutil/trace.go:171","msg":"trace[625224998] transaction","detail":"{read_only:false; response_revision:1331; number_of_response:1; }","duration":"100.017619ms","start":"2024-05-20T12:20:50.019102Z","end":"2024-05-20T12:20:50.119119Z","steps":["trace[625224998] 'process raft request'  (duration: 99.793918ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:21:57.916879Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2024-05-20T12:21:57.924994Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1145,"took":"7.832034ms","hash":2574517761,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:21:57.925085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2574517761,"revision":1145,"compact-revision":904}
	{"level":"info","ts":"2024-05-20T12:22:25.736809Z","caller":"traceutil/trace.go:171","msg":"trace[430372741] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"224.491074ms","start":"2024-05-20T12:22:25.512281Z","end":"2024-05-20T12:22:25.736772Z","steps":["trace[430372741] 'process raft request'  (duration: 224.253073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:25.974125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.558296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:25.974225Z","caller":"traceutil/trace.go:171","msg":"trace[1439624153] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"206.703098ms","start":"2024-05-20T12:22:25.767508Z","end":"2024-05-20T12:22:25.974212Z","steps":["trace[1439624153] 'range keys from in-memory index tree'  (duration: 206.506896ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:26.864539Z","caller":"traceutil/trace.go:171","msg":"trace[1459107816] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"150.383153ms","start":"2024-05-20T12:22:26.714135Z","end":"2024-05-20T12:22:26.864518Z","steps":["trace[1459107816] 'process raft request'  (duration: 150.225653ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:43.639207Z","caller":"traceutil/trace.go:171","msg":"trace[1481916495] transaction","detail":"{read_only:false; response_revision:1461; number_of_response:1; }","duration":"159.576496ms","start":"2024-05-20T12:22:43.479611Z","end":"2024-05-20T12:22:43.639188Z","steps":["trace[1481916495] 'process raft request'  (duration: 159.463096ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.777887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.881564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:44.778337Z","caller":"traceutil/trace.go:171","msg":"trace[1542137351] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1462; }","duration":"427.186365ms","start":"2024-05-20T12:22:44.350923Z","end":"2024-05-20T12:22:44.778109Z","steps":["trace[1542137351] 'range keys from in-memory index tree'  (duration: 426.694864ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.394969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-093300-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-05-20T12:22:44.778786Z","caller":"traceutil/trace.go:171","msg":"trace[755691261] range","detail":"{range_begin:/registry/minions/multinode-093300-m03; range_end:; response_count:1; response_revision:1462; }","duration":"336.839571ms","start":"2024-05-20T12:22:44.441934Z","end":"2024-05-20T12:22:44.778774Z","steps":["trace[755691261] 'range keys from in-memory index tree'  (duration: 336.219968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.441838Z","time spent":"336.975772ms","remote":"127.0.0.1:55370","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3171,"request content":"key:\"/registry/minions/multinode-093300-m03\" "}
	{"level":"warn","ts":"2024-05-20T12:22:44.778433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.35091Z","time spent":"427.511667ms","remote":"127.0.0.1:55230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 12:25:20 up 25 min,  0 users,  load average: 0.20, 0.28, 0.22
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:24:17.008426       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:24:27.015630       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:24:27.015775       1 main.go:227] handling current node
	I0520 12:24:27.015792       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:24:27.015801       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:24:37.028955       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:24:37.029066       1 main.go:227] handling current node
	I0520 12:24:37.029082       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:24:37.029090       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:24:47.043522       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:24:47.043616       1 main.go:227] handling current node
	I0520 12:24:47.043629       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:24:47.043637       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:24:57.049669       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:24:57.049923       1 main.go:227] handling current node
	I0520 12:24:57.049937       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:24:57.049945       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:25:07.065197       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:25:07.065308       1 main.go:227] handling current node
	I0520 12:25:07.065324       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:25:07.065332       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:25:17.081705       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:25:17.082373       1 main.go:227] handling current node
	I0520 12:25:17.082516       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:25:17.082532       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [477e3df15a9c] <==
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 12:18:09.877717       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62575: use of closed network connection
	E0520 12:18:10.700260       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62580: use of closed network connection
	E0520 12:18:11.474273       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62585: use of closed network connection
	E0520 12:18:48.326152       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62602: use of closed network connection
	E0520 12:18:58.782603       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62604: use of closed network connection
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:16.417564       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 12:02:16.906228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="303.284225ms"
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0520 12:06:44.544627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.513035ms"
	I0520 12:06:44.556530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.014067ms"
	I0520 12:06:44.557710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.9µs"
	I0520 12:06:47.616256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.299406ms"
	I0520 12:06:47.616355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.5µs"
	I0520 12:22:33.084385       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-093300-m03\" does not exist"
	I0520 12:22:33.104885       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-093300-m03" podCIDRs=["10.244.1.0/24"]
	I0520 12:22:35.968109       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-093300-m03"
	I0520 12:22:56.341095       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-093300-m03"
	I0520 12:22:56.368042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.3µs"
	I0520 12:22:56.389258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.3µs"
	I0520 12:22:59.571331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.348641ms"
	I0520 12:22:59.572056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.6µs"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:21:02 multinode-093300 kubelet[2141]: E0520 12:21:02.778436    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:21:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:21:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:21:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:21:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:22:02 multinode-093300 kubelet[2141]: E0520 12:22:02.780074    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:22:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:22:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:22:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:22:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:23:02 multinode-093300 kubelet[2141]: E0520 12:23:02.780285    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:23:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:23:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:23:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:23:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:24:02 multinode-093300 kubelet[2141]: E0520 12:24:02.779491    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:24:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:24:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:24:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:24:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:25:02 multinode-093300 kubelet[2141]: E0520 12:25:02.778935    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:25:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:25:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:25:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:25:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0520 05:25:12.263584    8940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
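The repeated kubelet canary failures in the stdout log above mean the guest kernel cannot provide an ip6tables `nat` table, typically because the `ip6table_nat` module is not loaded. A minimal probe, assuming a Linux shell inside the guest VM; the `modprobe` hint is a suggestion, not a confirmed fix for this image:

```shell
# Probe the ip6tables nat table the kubelet canary tries to create its chain in.
# On failure, echo a hint mirroring the "do you need to insmod?" message above.
if command -v ip6tables >/dev/null 2>&1 && ip6tables -t nat -L >/dev/null 2>&1; then
    msg="ip6tables nat: available"
else
    msg="ip6tables nat: unavailable (try: sudo modprobe ip6table_nat)"
fi
echo "$msg"
```

Note that listing the table requires root, so the probe also reports "unavailable" when run unprivileged even on a healthy kernel.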
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
E0520 05:25:25.054604    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.680488s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (73.25s)

TestMultiNode/serial/StopNode (124.79s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 node stop m03: (34.8682086s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status: exit status 7 (27.2044639s)

-- stdout --
	multinode-093300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-093300-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-093300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0520 05:26:09.898927   14376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr: exit status 7 (27.1860317s)

-- stdout --
	multinode-093300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-093300-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-093300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0520 05:26:37.114408    9496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 05:26:37.125070    9496 out.go:291] Setting OutFile to fd 2044 ...
	I0520 05:26:37.126155    9496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:26:37.126216    9496 out.go:304] Setting ErrFile to fd 1788...
	I0520 05:26:37.126216    9496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:26:37.140928    9496 out.go:298] Setting JSON to false
	I0520 05:26:37.140928    9496 mustload.go:65] Loading cluster: multinode-093300
	I0520 05:26:37.140928    9496 notify.go:220] Checking for updates...
	I0520 05:26:37.141949    9496 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:26:37.141949    9496 status.go:255] checking status of multinode-093300 ...
	I0520 05:26:37.143050    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:26:39.445310    9496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:26:39.445310    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:39.445310    9496 status.go:330] multinode-093300 host status = "Running" (err=<nil>)
	I0520 05:26:39.445310    9496 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:26:39.446127    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:26:41.737804    9496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:26:41.737878    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:41.738032    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:26:44.416126    9496 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:26:44.416126    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:44.416259    9496 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:26:44.428854    9496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:26:44.428854    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:26:46.644605    9496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:26:46.644605    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:46.644605    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:26:49.319900    9496 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:26:49.319957    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:49.319957    9496 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:26:49.429735    9496 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0007934s)
	I0520 05:26:49.442597    9496 ssh_runner.go:195] Run: systemctl --version
	I0520 05:26:49.467583    9496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:26:49.497611    9496 kubeconfig.go:125] found "multinode-093300" server: "https://172.25.248.197:8443"
	I0520 05:26:49.497681    9496 api_server.go:166] Checking apiserver status ...
	I0520 05:26:49.510957    9496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:26:49.549137    9496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2091/cgroup
	W0520 05:26:49.566522    9496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 05:26:49.579794    9496 ssh_runner.go:195] Run: ls
	I0520 05:26:49.588331    9496 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:26:49.595263    9496 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:26:49.595263    9496 status.go:422] multinode-093300 apiserver status = Running (err=<nil>)
	I0520 05:26:49.595263    9496 status.go:257] multinode-093300 status: &{Name:multinode-093300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 05:26:49.595263    9496 status.go:255] checking status of multinode-093300-m02 ...
	I0520 05:26:49.597125    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:26:51.815216    9496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:26:51.815216    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:51.816153    9496 status.go:330] multinode-093300-m02 host status = "Running" (err=<nil>)
	I0520 05:26:51.816153    9496 host.go:66] Checking if "multinode-093300-m02" exists ...
	I0520 05:26:51.816925    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:26:54.069154    9496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:26:54.069154    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:54.069270    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:26:56.742442    9496 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:26:56.743216    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:56.743216    9496 host.go:66] Checking if "multinode-093300-m02" exists ...
	I0520 05:26:56.757870    9496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 05:26:56.758594    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:26:59.068684    9496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:26:59.068855    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:26:59.068855    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:27:01.768284    9496 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:27:01.768284    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:01.768996    9496 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:27:01.877515    9496 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1196329s)
	I0520 05:27:01.891228    9496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:27:01.920703    9496 status.go:257] multinode-093300-m02 status: &{Name:multinode-093300-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 05:27:01.920814    9496 status.go:255] checking status of multinode-093300-m03 ...
	I0520 05:27:01.921726    9496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:27:04.156671    9496 main.go:141] libmachine: [stdout =====>] : Off
	
	I0520 05:27:04.156821    9496 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:04.156821    9496 status.go:330] multinode-093300-m03 host status = "Stopped" (err=<nil>)
	I0520 05:27:04.156949    9496 status.go:343] host is not running, skipping remaining checks
	I0520 05:27:04.156949    9496 status.go:257] multinode-093300-m03 status: &{Name:multinode-093300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
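In the trace above, each node whose host is `Running` is probed over SSH with `df -h /var | awk 'NR==2{print $5}'` before the kubelet and apiserver checks; the probe simply reads the Use% column of the filesystem. A local sketch of the same extraction, using `/` instead of `/var` and POSIX `df -P` so the device line never wraps (a small deviation from the logged command):

```shell
# Read the Use% column (field 5 of the second df line), the same field
# minikube's ssh_runner extracts for /var in the logs above.
usage=$(df -hP / | awk 'NR==2{print $5}')
echo "disk usage: $usage"
```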
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr": multinode-093300
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-093300-m02
type: Worker
host: Running
kubelet: Stopped

multinode-093300-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-093300 status --alsologtostderr": multinode-093300
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-093300-m02
type: Worker
host: Running
kubelet: Stopped

multinode-093300-m03
type: Worker
host: Stopped
kubelet: Stopped

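Both assertions above fire because, after stopping only m03, the status dump shows one running and two stopped kubelets (m02's kubelet is also down), presumably not the counts the test expects. Tallying the states from the quoted status text (a sketch; the real test parses the command output in Go):

```shell
# Tally kubelet states from the status dump the failed assertions quote above.
status='multinode-093300
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-093300-m02
type: Worker
host: Running
kubelet: Stopped

multinode-093300-m03
type: Worker
host: Stopped
kubelet: Stopped'

running=$(printf '%s\n' "$status" | grep -c '^kubelet: Running')
stopped=$(printf '%s\n' "$status" | grep -c '^kubelet: Stopped')
echo "running kubelets: $running, stopped kubelets: $stopped"
```

This prints `running kubelets: 1, stopped kubelets: 2`; with m02 healthy the tally would instead be 2 running and 1 stopped.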
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.7619432s)
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.7708518s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-093300 -- apply -f                   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT | 20 May 24 05:06 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- rollout                    | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o                | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec                       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-rk7lk -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-093300 -v 3                      | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:19 PDT | 20 May 24 05:22 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | multinode-093300 node stop m03                    | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:25 PDT | 20 May 24 05:26 PDT |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
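The provisioning step above follows an "install only if changed" idiom: the candidate unit is written to `docker.service.new`, and only when `diff` reports a difference is it moved into place and the service restarted. A minimal sketch of that idiom under stated assumptions — hypothetical temp files stand in for `/lib/systemd/system/docker.service`, and a status variable replaces the `systemctl` restart so the sketch runs anywhere:

```shell
# Sketch of the diff-or-install pattern from the log above.
dir=$(mktemp -d)
new="$dir/docker.service.new"
cur="$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd\n' > "$new"

# First pass: no installed unit exists, diff fails, so install and flag a restart.
if diff -u "$cur" "$new" >/dev/null 2>&1; then
  status1=unchanged
else
  mv "$new" "$cur"
  status1=installed
fi

# Second pass: regenerate an identical candidate; diff succeeds, nothing to do.
cp "$cur" "$new"
if diff -u "$cur" "$new" >/dev/null 2>&1; then
  status2=unchanged
else
  mv "$new" "$cur"
  status2=installed
fi

echo "$status1 $status2"   # installed unchanged
rm -rf "$dir"
```

This is why the log shows `diff: can't stat '/lib/systemd/system/docker.service'` on a fresh VM: the failed diff is the expected trigger for the first install.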
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
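The clock-fix sequence above reads the guest's `date +%s.%N`, compares it against the host clock, and resets the guest with `sudo date -s @<epoch>` when they drift. A sketch with hypothetical epoch values (the real log measured a delta of about 5.1s):

```shell
# Sketch of the guest-clock drift check from the log above.
guest=1716206465   # whole-second epoch reported by the guest (hypothetical)
host=1716206460    # epoch observed on the host at the same moment (hypothetical)
delta=$((guest - host))
echo "delta=${delta}s"                                  # delta=5s
[ "$delta" -eq 0 ] || echo "would run: sudo date -s @${guest}"
```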
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
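The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver, then reloads systemd and restarts containerd. A side-effect-free sketch of the key substitution, using a hypothetical sample config (the `-i` in-place flag from the log is replaced with a redirect so the sketch is portable):

```shell
# Sketch of the SystemdCgroup rewrite from the log above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution minikube applies: force SystemdCgroup = false,
# preserving the line's leading indentation via the \1 backreference.
sed -E 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$cfg" > "$cfg.patched"
result=$(grep 'SystemdCgroup' "$cfg.patched")
echo "$result"
rm -f "$cfg" "$cfg.patched"
```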
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
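The `/etc/hosts` update above is an idempotent rewrite: strip any stale `host.minikube.internal` line, append the current mapping, and swap the file in. A sketch of the same idiom, with a temp file standing in for `/etc/hosts` (the addresses are the ones from the log; the pre-existing stale entry is hypothetical):

```shell
# Sketch of the /etc/hosts rewrite from the log above.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.240.2\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
tmp=$(mktemp)
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.25.240.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
entry=$(grep 'host.minikube.internal' "$hosts")
echo "$entry"
rm -f "$hosts" "$tmp"
```

Because the old line is filtered out before the new one is appended, running the rewrite repeatedly leaves exactly one `host.minikube.internal` entry.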
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
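The `[executing ==>]` lines above show the hyperv driver shelling out to PowerShell to query the VM. A minimal sketch of how such an invocation can be assembled with `os/exec` — the function name is illustrative, not minikube's actual API; only the executable path and flags are taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildHyperVState builds (but does not run) the PowerShell invocation
// that queries a Hyper-V VM's power state, mirroring the command in the
// log. Illustrative only; not minikube's real helper.
func buildHyperVState(vmName string) *exec.Cmd {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	// -NoProfile/-NonInteractive keep the call deterministic and fast.
	script := fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName)
	return exec.Command(ps, "-NoProfile", "-NonInteractive", script)
}

func main() {
	cmd := buildHyperVState("multinode-093300")
	fmt.Println(cmd.Args)
}
```

Constructing the `*exec.Cmd` without running it keeps the sketch portable; on a Windows host with Hyper-V, `cmd.Output()` would return the state string seen in the `[stdout =====>]` lines.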
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
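The flip from `"Ready":"False"` to `"Ready":"True"` above is read out of the node's `status.conditions` in the JSON response bodies (truncated in the log). A minimal sketch of that extraction with `encoding/json`, using a trimmed stand-in struct rather than the full `v1.Node` type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node mirrors only the fields needed to read the Ready condition from
// a /api/v1/nodes response body; the real object is the full v1.Node.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the node's Ready condition is "True".
func isReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, err := isReady(body)
	fmt.Println(ok, err)
}
```

The subsequent pod wait works the same way against `/api/v1/namespaces/kube-system/pods`, checking each system-critical pod's Ready condition instead of the node's.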
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
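The ephemeral-storage capacity above is a Kubernetes binary-suffixed quantity, "17734596Ki". Converting it to bytes is a multiply by 1024 (a minimal sketch handling only the `Ki` suffix; the real `apimachinery` `resource.Quantity` parser covers the full suffix set):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kiToBytes converts a "...Ki" quantity string to bytes.
func kiToBytes(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Ki"), 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1024, nil
}

func main() {
	b, err := kiToBytes("17734596Ki") // capacity from the log above
	if err != nil {
		panic(err)
	}
	fmt.Println(b) // 18160226304 bytes, roughly 16.9 GiB
}
```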
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
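The `Get-VMSwitch` pipeline above filters for External switches or the well-known "Default Switch" GUID, and minikube then parses the resulting JSON. In Hyper-V's `SwitchType` enum, 0 is Private, 1 is Internal, and 2 is External; the "Default Switch" is Internal (type 1), which is why it matches only via its GUID. A sketch of the selection over that JSON (`pickSwitch` is a hypothetical simplification of minikube's logic):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the JSON fields emitted by the Get-VMSwitch pipeline.
type vmSwitch struct {
	Id         string `json:"Id"`
	Name       string `json:"Name"`
	SwitchType int    `json:"SwitchType"` // 0=Private, 1=Internal, 2=External
}

// pickSwitch prefers an External switch and otherwise falls back to the
// first candidate (the sort in the query puts External entries last by
// SwitchType value; this sketch just scans).
func pickSwitch(switches []vmSwitch) (vmSwitch, bool) {
	for _, s := range switches {
		if s.SwitchType == 2 {
			return s, true
		}
	}
	if len(switches) > 0 {
		return switches[0], true
	}
	return vmSwitch{}, false
}

func main() {
	// The stdout block from the log above, as one JSON document.
	body := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	var sw []vmSwitch
	if err := json.Unmarshal(body, &sw); err != nil {
		panic(err)
	}
	s, _ := pickSwitch(sw)
	fmt.Println(s.Name) // Default Switch
}
```

With no External switch available, the fallback explains the `Using switch "Default Switch"` line that follows the second query below.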
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
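	The SSH command above is the provisioner's /etc/hosts patching idiom: add a `127.0.1.1` entry for the new hostname only if one is not already present, rewriting an existing `127.0.1.1` line in place. A minimal sketch of the same logic run against a temp file instead of the real /etc/hosts (the seed contents here are stand-ins, not the test's actual file):

```shell
# Sketch of the /etc/hosts patching idiom from the provision step,
# run against a temp file instead of the real /etc/hosts.
HOSTS=$(mktemp)
NAME=multinode-093300-m02   # hostname being provisioned

printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # an existing 127.0.1.1 entry: rewrite it in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # no 127.0.1.1 entry yet: append one
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi

RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$RESULT"   # → 127.0.1.1 multinode-093300-m02
```

In the log's run the command produced empty output, consistent with the hostname already being set by the preceding `sudo hostname ... | sudo tee /etc/hostname` step.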
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
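	The probe above is how the provisioner learns the guest's root filesystem type (tmpfs here, so the buildroot guest is running from RAM). The same one-liner works on any GNU coreutils system:

```shell
# Detect the root filesystem type, as the provisioner does over SSH.
# On the buildroot guest this prints "tmpfs"; on other systems the
# value will differ, so only non-emptiness is asserted.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fs: $FSTYPE"
```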
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
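	The unit install above uses a diff-or-replace idiom: if the freshly written `docker.service.new` differs from the installed copy (or, as here, no installed copy exists and `diff` fails with "can't stat"), the new file is moved into place and the service is reloaded, enabled and restarted. A sketch of the same idiom with temp stand-in paths and the systemd calls stubbed out:

```shell
# Sketch of the diff-or-replace install idiom used for docker.service.
# DIR is a temp stand-in for /lib/systemd/system; the daemon-reload /
# enable / restart steps are represented by an echo, since they need
# a live systemd.
DIR=$(mktemp -d)
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$DIR/docker.service.new"

# As in the log: the old unit does not exist yet, so diff fails and
# the new file is moved into place.
diff -u "$DIR/docker.service" "$DIR/docker.service.new" 2>/dev/null || {
    mv "$DIR/docker.service.new" "$DIR/docker.service"
    echo "unit changed: would daemon-reload, enable and restart docker"
}
```

The "Created symlink" line in the log is the visible effect of the `systemctl -f enable docker` branch running.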
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
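	The RUNTIME_ENABLE failure above follows a recognizable pattern in the journal: dockerd restarts cleanly, starts up again under a new PID, then after a 60-second timeout fails to dial the managed containerd socket. A minimal sketch of triaging a saved journal dump for the fatal lines (the `docker.log` filename and the here-doc excerpt, taken from the output above, are assumptions for illustration; on a live node the same dump comes from `journalctl --no-pager -u docker`):

```shell
#!/bin/sh
# Triage a saved docker.service journal dump for the fatal restart lines.
# In this sketch we reconstruct a small excerpt of the journal seen above;
# on the node itself: journalctl --no-pager -u docker > docker.log
cat > docker.log <<'EOF'
May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
EOF

# Pull out only the lines that explain the failure: the daemon's own fatal
# error plus systemd's exit-status records.
grep -E "failed to (start daemon|dial)|Failed with result|status=1/FAILURE" docker.log
```

	Note the exactly-60-second gap between "Starting up" (12:05:07) and the dial failure (12:06:07): dockerd gave up waiting for its managed containerd rather than crashing outright, which points investigation at the containerd child process and `/run/containerd/containerd.sock` rather than at dockerd's own configuration.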
	
	
	==> Docker <==
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:18:32 multinode-093300 dockerd[1329]: 2024/05/20 12:18:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:19 multinode-093300 dockerd[1329]: 2024/05/20 12:19:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb9d0befbc6f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   2ffde8c3540f6       busybox-fc5497c4f-rk7lk
	c2f3e10de8772       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago      Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                                         25 minutes ago      Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                                         25 minutes ago      Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                                         25 minutes ago      Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                                         25 minutes ago      Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	[INFO] 10.244.0.3:48640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231801s
	[INFO] 10.244.0.3:43113 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.175241678s
	[INFO] 10.244.0.3:55421 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066162156s
	[INFO] 10.244.0.3:57037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.307819065s
	[INFO] 10.244.0.3:46291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186401s
	[INFO] 10.244.0.3:42353 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028087509s
	[INFO] 10.244.0.3:39344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194901s
	[INFO] 10.244.0.3:36993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000272401s
	[INFO] 10.244.0.3:48495 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011425645s
	[INFO] 10.244.0.3:49945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142201s
	[INFO] 10.244.0.3:52438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.0.3:51309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.0.3:43788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001446s
	[INFO] 10.244.0.3:48355 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215101s
	[INFO] 10.244.0.3:46628 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000596s
	[INFO] 10.244.0.3:52558 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000566602s
	[INFO] 10.244.0.3:32981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320401s
	[INFO] 10.244.0.3:49440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000250601s
	[INFO] 10.244.0.3:54411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000254101s
	[INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:27:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:22:26 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rk7lk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25m   kube-proxy       
	  Normal  Starting                 25m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m   kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m   kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m   kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25m   node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                24m   kubelet          Node multinode-093300 status is now: NodeReady
	
	
	Name:               multinode-093300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T05_22_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:22:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:25:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.250.168
	  Hostname:    multinode-093300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f1736c8bff04fb69e3451244d381888
	  System UUID:                8c66bb4f-dce2-f44a-be67-ef9ccca5596c
	  Boot ID:                    aa950763-894a-47de-9417-30ddee9d31ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ncmp8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kindnet-cjqrv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m52s
	  kube-system                 kube-proxy-8b6tx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m52s (x2 over 4m53s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x2 over 4m53s)  kubelet          Node multinode-093300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x2 over 4m53s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m50s                  node-controller  Node multinode-093300-m03 event: Registered Node multinode-093300-m03 in Controller
	  Normal  NodeReady                4m29s                  kubelet          Node multinode-093300-m03 status is now: NodeReady
	  Normal  NodeNotReady             54s                    node-controller  Node multinode-093300-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	[May20 12:06] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:21:57.924994Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1145,"took":"7.832034ms","hash":2574517761,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:21:57.925085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2574517761,"revision":1145,"compact-revision":904}
	{"level":"info","ts":"2024-05-20T12:22:25.736809Z","caller":"traceutil/trace.go:171","msg":"trace[430372741] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"224.491074ms","start":"2024-05-20T12:22:25.512281Z","end":"2024-05-20T12:22:25.736772Z","steps":["trace[430372741] 'process raft request'  (duration: 224.253073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:25.974125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.558296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:25.974225Z","caller":"traceutil/trace.go:171","msg":"trace[1439624153] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"206.703098ms","start":"2024-05-20T12:22:25.767508Z","end":"2024-05-20T12:22:25.974212Z","steps":["trace[1439624153] 'range keys from in-memory index tree'  (duration: 206.506896ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:26.864539Z","caller":"traceutil/trace.go:171","msg":"trace[1459107816] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"150.383153ms","start":"2024-05-20T12:22:26.714135Z","end":"2024-05-20T12:22:26.864518Z","steps":["trace[1459107816] 'process raft request'  (duration: 150.225653ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:43.639207Z","caller":"traceutil/trace.go:171","msg":"trace[1481916495] transaction","detail":"{read_only:false; response_revision:1461; number_of_response:1; }","duration":"159.576496ms","start":"2024-05-20T12:22:43.479611Z","end":"2024-05-20T12:22:43.639188Z","steps":["trace[1481916495] 'process raft request'  (duration: 159.463096ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.777887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.881564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:44.778337Z","caller":"traceutil/trace.go:171","msg":"trace[1542137351] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1462; }","duration":"427.186365ms","start":"2024-05-20T12:22:44.350923Z","end":"2024-05-20T12:22:44.778109Z","steps":["trace[1542137351] 'range keys from in-memory index tree'  (duration: 426.694864ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.394969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-093300-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-05-20T12:22:44.778786Z","caller":"traceutil/trace.go:171","msg":"trace[755691261] range","detail":"{range_begin:/registry/minions/multinode-093300-m03; range_end:; response_count:1; response_revision:1462; }","duration":"336.839571ms","start":"2024-05-20T12:22:44.441934Z","end":"2024-05-20T12:22:44.778774Z","steps":["trace[755691261] 'range keys from in-memory index tree'  (duration: 336.219968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.441838Z","time spent":"336.975772ms","remote":"127.0.0.1:55370","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3171,"request content":"key:\"/registry/minions/multinode-093300-m03\" "}
	{"level":"warn","ts":"2024-05-20T12:22:44.778433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.35091Z","time spent":"427.511667ms","remote":"127.0.0.1:55230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-20T12:26:05.359811Z","caller":"traceutil/trace.go:171","msg":"trace[671699277] transaction","detail":"{read_only:false; response_revision:1666; number_of_response:1; }","duration":"210.939268ms","start":"2024-05-20T12:26:05.148857Z","end":"2024-05-20T12:26:05.359796Z","steps":["trace[671699277] 'process raft request'  (duration: 210.462066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:26:05.36072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.492967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:26:05.360999Z","caller":"traceutil/trace.go:171","msg":"trace[2035467320] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:1666; }","duration":"101.814068ms","start":"2024-05-20T12:26:05.259175Z","end":"2024-05-20T12:26:05.360989Z","steps":["trace[2035467320] 'agreement among raft nodes before linearized reading'  (duration: 101.445266ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:26:05.359487Z","caller":"traceutil/trace.go:171","msg":"trace[1720821549] linearizableReadLoop","detail":"{readStateIndex:1976; appliedIndex:1975; }","duration":"100.19136ms","start":"2024-05-20T12:26:05.259278Z","end":"2024-05-20T12:26:05.359469Z","steps":["trace[1720821549] 'read index received'  (duration: 99.991659ms)","trace[1720821549] 'applied index is now lower than readState.Index'  (duration: 199.101µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:26:07.16433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.913412ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5005345909760475429 > lease_revoke:<id:45768f95e13a40e5>","response":"size:27"}
	{"level":"info","ts":"2024-05-20T12:26:10.873662Z","caller":"traceutil/trace.go:171","msg":"trace[962951199] transaction","detail":"{read_only:false; response_revision:1669; number_of_response:1; }","duration":"194.958196ms","start":"2024-05-20T12:26:10.678684Z","end":"2024-05-20T12:26:10.873642Z","steps":["trace[962951199] 'process raft request'  (duration: 194.799695ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:26:10.875059Z","caller":"traceutil/trace.go:171","msg":"trace[118346474] linearizableReadLoop","detail":"{readStateIndex:1980; appliedIndex:1980; }","duration":"117.296039ms","start":"2024-05-20T12:26:10.75769Z","end":"2024-05-20T12:26:10.874986Z","steps":["trace[118346474] 'read index received'  (duration: 117.291539ms)","trace[118346474] 'applied index is now lower than readState.Index'  (duration: 3.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:26:10.87526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.58774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:26:10.877017Z","caller":"traceutil/trace.go:171","msg":"trace[447856641] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1669; }","duration":"119.378848ms","start":"2024-05-20T12:26:10.757626Z","end":"2024-05-20T12:26:10.877005Z","steps":["trace[447856641] 'agreement among raft nodes before linearized reading'  (duration: 117.51034ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:26:57.941625Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1386}
	{"level":"info","ts":"2024-05-20T12:26:57.950449Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1386,"took":"7.872136ms","hash":2122880162,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1708032,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-05-20T12:26:57.950557Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2122880162,"revision":1386,"compact-revision":1145}
	
	
	==> kernel <==
	 12:27:25 up 27 min,  0 users,  load average: 0.62, 0.37, 0.26
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:26:17.174864       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:26:27.182027       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:26:27.182072       1 main.go:227] handling current node
	I0520 12:26:27.182083       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:26:27.182090       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:26:37.193605       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:26:37.193753       1 main.go:227] handling current node
	I0520 12:26:37.193770       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:26:37.193778       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:26:47.207567       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:26:47.207703       1 main.go:227] handling current node
	I0520 12:26:47.207718       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:26:47.207761       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:26:57.214896       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:26:57.214994       1 main.go:227] handling current node
	I0520 12:26:57.215009       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:26:57.215017       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:27:07.232115       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:27:07.232627       1 main.go:227] handling current node
	I0520 12:27:07.232697       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:27:07.232906       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:27:17.243694       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:27:17.243897       1 main.go:227] handling current node
	I0520 12:27:17.243912       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:27:17.243920       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [477e3df15a9c] <==
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 12:18:09.877717       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62575: use of closed network connection
	E0520 12:18:10.700260       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62580: use of closed network connection
	E0520 12:18:11.474273       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62585: use of closed network connection
	E0520 12:18:48.326152       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62602: use of closed network connection
	E0520 12:18:58.782603       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62604: use of closed network connection
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0520 12:06:44.544627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.513035ms"
	I0520 12:06:44.556530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.014067ms"
	I0520 12:06:44.557710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.9µs"
	I0520 12:06:47.616256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.299406ms"
	I0520 12:06:47.616355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.5µs"
	I0520 12:22:33.084385       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-093300-m03\" does not exist"
	I0520 12:22:33.104885       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-093300-m03" podCIDRs=["10.244.1.0/24"]
	I0520 12:22:35.968109       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-093300-m03"
	I0520 12:22:56.341095       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-093300-m03"
	I0520 12:22:56.368042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.3µs"
	I0520 12:22:56.389258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.3µs"
	I0520 12:22:59.571331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.348641ms"
	I0520 12:22:59.572056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.6µs"
	I0520 12:26:31.159518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.342304ms"
	I0520 12:26:31.162980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.801µs"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:23:02 multinode-093300 kubelet[2141]: E0520 12:23:02.780285    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:23:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:23:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:23:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:23:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:24:02 multinode-093300 kubelet[2141]: E0520 12:24:02.779491    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:24:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:24:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:24:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:24:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:25:02 multinode-093300 kubelet[2141]: E0520 12:25:02.778935    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:25:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:25:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:25:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:25:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:26:02 multinode-093300 kubelet[2141]: E0520 12:26:02.779246    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:26:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:26:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:26:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:26:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:27:02 multinode-093300 kubelet[2141]: E0520 12:27:02.791532    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:27:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:27:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:27:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:27:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:27:17.065600    7380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.6966757s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (124.79s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (145.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 node start m03 -v=7 --alsologtostderr
E0520 05:28:04.575607    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 node start m03 -v=7 --alsologtostderr: exit status 1 (1m3.017316s)

                                                
                                                
-- stdout --
	* Starting "multinode-093300-m03" worker node in "multinode-093300" cluster
	* Restarting existing hyperv VM for "multinode-093300-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:27:39.813971    2168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 05:27:39.821984    2168 out.go:291] Setting OutFile to fd 1052 ...
	I0520 05:27:39.837590    2168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:27:39.837590    2168 out.go:304] Setting ErrFile to fd 1936...
	I0520 05:27:39.837590    2168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:27:39.853274    2168 mustload.go:65] Loading cluster: multinode-093300
	I0520 05:27:39.853751    2168 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:27:39.854765    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:27:42.098585    2168 main.go:141] libmachine: [stdout =====>] : Off
	
	I0520 05:27:42.098687    2168 main.go:141] libmachine: [stderr =====>] : 
	W0520 05:27:42.098764    2168 host.go:58] "multinode-093300-m03" host status: Stopped
	I0520 05:27:42.102062    2168 out.go:177] * Starting "multinode-093300-m03" worker node in "multinode-093300" cluster
	I0520 05:27:42.104393    2168 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:27:42.104562    2168 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 05:27:42.104562    2168 cache.go:56] Caching tarball of preloaded images
	I0520 05:27:42.105258    2168 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:27:42.105258    2168 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:27:42.105258    2168 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:27:42.107477    2168 start.go:360] acquireMachinesLock for multinode-093300-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:27:42.107477    2168 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300-m03"
	I0520 05:27:42.107477    2168 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:27:42.107477    2168 fix.go:54] fixHost starting: m03
	I0520 05:27:42.108531    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:27:44.364682    2168 main.go:141] libmachine: [stdout =====>] : Off
	
	I0520 05:27:44.365584    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:44.365688    2168 fix.go:112] recreateIfNeeded on multinode-093300-m03: state=Stopped err=<nil>
	W0520 05:27:44.365732    2168 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:27:44.368306    2168 out.go:177] * Restarting existing hyperv VM for "multinode-093300-m03" ...
	I0520 05:27:44.370582    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m03
	I0520 05:27:47.521993    2168 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:27:47.521993    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:47.521993    2168 main.go:141] libmachine: Waiting for host to start...
	I0520 05:27:47.522706    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:27:49.930124    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:27:49.930124    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:49.930804    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:27:52.617023    2168 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:27:52.617080    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:53.623927    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:27:56.002353    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:27:56.003361    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:56.003456    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:27:58.684226    2168 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:27:58.684226    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:27:59.687425    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:02.041313    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:02.041313    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:02.041313    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:04.735551    2168 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:28:04.735763    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:05.744816    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:08.147942    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:08.148673    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:08.148785    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:10.832941    2168 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:28:10.832941    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:11.845010    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:14.218857    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:14.218903    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:14.218903    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:16.911810    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119
	
	I0520 05:28:16.911810    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:16.915243    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:19.198622    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:19.198622    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:19.199384    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:21.915764    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119
	
	I0520 05:28:21.915764    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:21.915764    2168 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:28:21.918815    2168 machine.go:94] provisionDockerMachine start ...
	I0520 05:28:21.918996    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:24.179648    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:24.180178    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:24.180178    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:26.899570    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119
	
	I0520 05:28:26.899855    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:26.906607    2168 main.go:141] libmachine: Using SSH client type: native
	I0520 05:28:26.907254    2168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.252.119 22 <nil> <nil>}
	I0520 05:28:26.907254    2168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:28:27.051256    2168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:28:27.051386    2168 buildroot.go:166] provisioning hostname "multinode-093300-m03"
	I0520 05:28:27.051480    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:29.334790    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:29.335232    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:29.335457    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:32.038866    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119
	
	I0520 05:28:32.038866    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:32.045238    2168 main.go:141] libmachine: Using SSH client type: native
	I0520 05:28:32.045962    2168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.252.119 22 <nil> <nil>}
	I0520 05:28:32.045962    2168 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m03 && echo "multinode-093300-m03" | sudo tee /etc/hostname
	I0520 05:28:32.208122    2168 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m03
	
	I0520 05:28:32.208122    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:34.568818    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:34.568856    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:34.569006    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:37.265322    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119
	
	I0520 05:28:37.265629    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:37.272010    2168 main.go:141] libmachine: Using SSH client type: native
	I0520 05:28:37.272735    2168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.252.119 22 <nil> <nil>}
	I0520 05:28:37.272735    2168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:28:37.431303    2168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:28:37.431426    2168 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:28:37.431501    2168 buildroot.go:174] setting up certificates
	I0520 05:28:37.431555    2168 provision.go:84] configureAuth start
	I0520 05:28:37.431652    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
	I0520 05:28:39.685581    2168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:28:39.686417    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:39.686514    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
	I0520 05:28:42.332520    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119
	
	I0520 05:28:42.332595    2168 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:28:42.332595    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state

** /stderr **
multinode_test.go:284: W0520 05:27:39.813971    2168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0520 05:27:39.821984    2168 out.go:291] Setting OutFile to fd 1052 ...
I0520 05:27:39.837590    2168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 05:27:39.837590    2168 out.go:304] Setting ErrFile to fd 1936...
I0520 05:27:39.837590    2168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 05:27:39.853274    2168 mustload.go:65] Loading cluster: multinode-093300
I0520 05:27:39.853751    2168 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 05:27:39.854765    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:27:42.098585    2168 main.go:141] libmachine: [stdout =====>] : Off

I0520 05:27:42.098687    2168 main.go:141] libmachine: [stderr =====>] : 
W0520 05:27:42.098764    2168 host.go:58] "multinode-093300-m03" host status: Stopped
I0520 05:27:42.102062    2168 out.go:177] * Starting "multinode-093300-m03" worker node in "multinode-093300" cluster
I0520 05:27:42.104393    2168 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 05:27:42.104562    2168 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
I0520 05:27:42.104562    2168 cache.go:56] Caching tarball of preloaded images
I0520 05:27:42.105258    2168 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0520 05:27:42.105258    2168 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 05:27:42.105258    2168 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
I0520 05:27:42.107477    2168 start.go:360] acquireMachinesLock for multinode-093300-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 05:27:42.107477    2168 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300-m03"
I0520 05:27:42.107477    2168 start.go:96] Skipping create...Using existing machine configuration
I0520 05:27:42.107477    2168 fix.go:54] fixHost starting: m03
I0520 05:27:42.108531    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:27:44.364682    2168 main.go:141] libmachine: [stdout =====>] : Off

I0520 05:27:44.365584    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:27:44.365688    2168 fix.go:112] recreateIfNeeded on multinode-093300-m03: state=Stopped err=<nil>
W0520 05:27:44.365732    2168 fix.go:138] unexpected machine state, will restart: <nil>
I0520 05:27:44.368306    2168 out.go:177] * Restarting existing hyperv VM for "multinode-093300-m03" ...
I0520 05:27:44.370582    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m03
I0520 05:27:47.521993    2168 main.go:141] libmachine: [stdout =====>] : 
I0520 05:27:47.521993    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:27:47.521993    2168 main.go:141] libmachine: Waiting for host to start...
I0520 05:27:47.522706    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:27:49.930124    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:27:49.930124    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:27:49.930804    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:27:52.617023    2168 main.go:141] libmachine: [stdout =====>] : 
I0520 05:27:52.617080    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:27:53.623927    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:27:56.002353    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:27:56.003361    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:27:56.003456    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:27:58.684226    2168 main.go:141] libmachine: [stdout =====>] : 
I0520 05:27:58.684226    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:27:59.687425    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:02.041313    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:02.041313    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:02.041313    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:04.735551    2168 main.go:141] libmachine: [stdout =====>] : 
I0520 05:28:04.735763    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:05.744816    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:08.147942    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:08.148673    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:08.148785    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:10.832941    2168 main.go:141] libmachine: [stdout =====>] : 
I0520 05:28:10.832941    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:11.845010    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:14.218857    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:14.218903    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:14.218903    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:16.911810    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119

I0520 05:28:16.911810    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:16.915243    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:19.198622    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:19.198622    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:19.199384    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:21.915764    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119

I0520 05:28:21.915764    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:21.915764    2168 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
I0520 05:28:21.918815    2168 machine.go:94] provisionDockerMachine start ...
I0520 05:28:21.918996    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:24.179648    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:24.180178    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:24.180178    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:26.899570    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119

I0520 05:28:26.899855    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:26.906607    2168 main.go:141] libmachine: Using SSH client type: native
I0520 05:28:26.907254    2168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.252.119 22 <nil> <nil>}
I0520 05:28:26.907254    2168 main.go:141] libmachine: About to run SSH command:
hostname
I0520 05:28:27.051256    2168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0520 05:28:27.051386    2168 buildroot.go:166] provisioning hostname "multinode-093300-m03"
I0520 05:28:27.051480    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:29.334790    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:29.335232    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:29.335457    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:32.038866    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119

I0520 05:28:32.038866    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:32.045238    2168 main.go:141] libmachine: Using SSH client type: native
I0520 05:28:32.045962    2168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.252.119 22 <nil> <nil>}
I0520 05:28:32.045962    2168 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-093300-m03 && echo "multinode-093300-m03" | sudo tee /etc/hostname
I0520 05:28:32.208122    2168 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m03

I0520 05:28:32.208122    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:34.568818    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:34.568856    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:34.569006    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:37.265322    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119

I0520 05:28:37.265629    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:37.272010    2168 main.go:141] libmachine: Using SSH client type: native
I0520 05:28:37.272735    2168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.252.119 22 <nil> <nil>}
I0520 05:28:37.272735    2168 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-093300-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-093300-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0520 05:28:37.431303    2168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0520 05:28:37.431426    2168 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0520 05:28:37.431501    2168 buildroot.go:174] setting up certificates
I0520 05:28:37.431555    2168 provision.go:84] configureAuth start
I0520 05:28:37.431652    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
I0520 05:28:39.685581    2168 main.go:141] libmachine: [stdout =====>] : Running

I0520 05:28:39.686417    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:39.686514    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m03 ).networkadapters[0]).ipaddresses[0]
I0520 05:28:42.332520    2168 main.go:141] libmachine: [stdout =====>] : 172.25.252.119

I0520 05:28:42.332595    2168 main.go:141] libmachine: [stderr =====>] : 
I0520 05:28:42.332595    2168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m03 ).state
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-093300 node start m03 -v=7 --alsologtostderr": exit status 1
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
E0520 05:29:27.813323    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-093300 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-093300 -n multinode-093300: (12.710871s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-093300 logs -n 25: (8.7976376s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-093300 -- rollout       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:06 PDT |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:16 PDT | 20 May 24 05:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:17 PDT | 20 May 24 05:17 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8 -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- get pods -o   | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-ncmp8              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT | 20 May 24 05:18 PDT |
	|         | busybox-fc5497c4f-rk7lk              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-093300 -- exec          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:18 PDT |                     |
	|         | busybox-fc5497c4f-rk7lk -- sh        |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.240.1            |                  |                   |         |                     |                     |
	| node    | add -p multinode-093300 -v 3         | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:19 PDT | 20 May 24 05:22 PDT |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-093300 node stop m03       | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:25 PDT | 20 May 24 05:26 PDT |
	| node    | multinode-093300 node start          | multinode-093300 | minikube1\jenkins | v1.33.1 | 20 May 24 05:27 PDT |                     |
	|         | m03 -v=7 --alsologtostderr           |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:42
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:42.815010    4324 out.go:291] Setting OutFile to fd 620 ...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.816241    4324 out.go:304] Setting ErrFile to fd 1160...
	I0520 04:58:42.816241    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:42.840692    4324 out.go:298] Setting JSON to false
	I0520 04:58:42.844724    4324 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6319,"bootTime":1716200003,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 04:58:42.844724    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:42.850600    4324 out.go:177] * [multinode-093300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 04:58:42.854189    4324 notify.go:220] Checking for updates...
	I0520 04:58:42.856471    4324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 04:58:42.862039    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:42.864450    4324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 04:58:42.866808    4324 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 04:58:42.869028    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:42.871898    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:58:42.872846    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:48.504436    4324 out.go:177] * Using the hyperv driver based on user configuration
	I0520 04:58:48.508034    4324 start.go:297] selected driver: hyperv
	I0520 04:58:48.508107    4324 start.go:901] validating driver "hyperv" against <nil>
	I0520 04:58:48.508107    4324 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:48.559327    4324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:58:48.560423    4324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:58:48.560423    4324 cni.go:84] Creating CNI manager for ""
	I0520 04:58:48.560423    4324 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:58:48.560423    4324 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:58:48.560423    4324 start.go:340] cluster config:
	{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:58:48.561748    4324 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:48.566491    4324 out.go:177] * Starting "multinode-093300" primary control-plane node in "multinode-093300" cluster
	I0520 04:58:48.569074    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:58:48.569207    4324 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 04:58:48.569207    4324 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:48.569207    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 04:58:48.569820    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:58:48.569972    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 04:58:48.569972    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json: {Name:mkb5ce383bfa3083c5b214eca315256a3f3cd6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:360] acquireMachinesLock for multinode-093300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:48.571347    4324 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-093300"
	I0520 04:58:48.571347    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:58:48.571347    4324 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 04:58:48.576086    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:58:48.576086    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 04:58:48.576086    4324 client.go:168] LocalClient.Create starting
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 04:58:48.576086    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: Parsing certificate...
	I0520 04:58:48.577357    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:50.713683    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:52.516140    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:58:54.094569    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:58:54.094778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:58:54.094892    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:58:57.937675    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:58:57.938251    4324 main.go:141] libmachine: [stderr =====>] : 
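	The switch-selection step above shells out to PowerShell and parses the `ConvertTo-Json` output of `Get-VMSwitch`. A minimal sketch of that parse-and-pick logic, assuming Hyper-V's `SwitchType` enum (0 = Private, 1 = Internal, 2 = External); the struct and function names are illustrative, not minikube's actual identifiers:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the three fields selected by the PowerShell query above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // 0 = Private, 1 = Internal, 2 = External
}

// pickSwitch prefers an External switch and otherwise falls back to the
// first entry returned (here, the "Default Switch", which is Internal).
func pickSwitch(raw []byte) (vmSwitch, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return vmSwitch{}, err
	}
	if len(switches) == 0 {
		return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
	}
	for _, s := range switches {
		if s.SwitchType == 2 { // External
			return s, nil
		}
	}
	return switches[0], nil
}

func main() {
	// The exact JSON emitted in the log above.
	raw := []byte(`[
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]`)
	s, err := pickSwitch(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Using switch %q\n", s.Name)
}
```

	With only the Internal "Default Switch" available, the fallback branch is taken, matching the `Using switch "Default Switch"` line later in the log.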
	I0520 04:58:57.940823    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 04:58:58.453971    4324 main.go:141] libmachine: Creating SSH key...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: Creating VM...
	I0520 04:58:59.375881    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 04:59:02.421468    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 04:59:02.421705    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:02.421872    4324 main.go:141] libmachine: Using switch "Default Switch"
	I0520 04:59:02.421994    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 04:59:04.241436    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:04.242412    4324 main.go:141] libmachine: Creating VHD
	I0520 04:59:04.242447    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 04:59:08.102294    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 291869B2-7278-42A2-A3CC-0F234FDB1077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 04:59:08.102369    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:08.102369    4324 main.go:141] libmachine: Writing magic tar header
	I0520 04:59:08.102485    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 04:59:08.112101    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 04:59:11.377183    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:11.377578    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:11.377633    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd' -SizeBytes 20000MB
	I0520 04:59:14.044673    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:14.044820    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 04:59:17.787493    4324 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-093300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 04:59:17.787768    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:17.787865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300 -DynamicMemoryEnabled $false
	I0520 04:59:20.101636    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:20.102292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:20.102364    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300 -Count 2
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:22.424135    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:22.424624    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\boot2docker.iso'
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:25.116899    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\disk.vhd'
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:27.883587    4324 main.go:141] libmachine: Starting VM...
	I0520 04:59:27.883587    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:31.087366    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 04:59:31.087466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:33.493675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:33.493717    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:33.493866    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:36.207280    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:36.207512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:37.213839    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:39.591092    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:39.591821    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:42.290411    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:43.298312    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:45.591020    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:45.591357    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:45.591428    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:48.288658    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:49.293849    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:51.640445    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:51.641469    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 04:59:54.279103    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:55.285718    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 04:59:57.660938    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 04:59:57.661172    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:00.367863    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:00.368672    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:02.641802    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:02.641927    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:02.642010    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:00:02.642155    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:04.898847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:04.899077    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:04.899159    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:07.557793    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:07.558272    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:07.567350    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:07.577325    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:07.578325    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:00:07.719330    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:00:07.719330    4324 buildroot.go:166] provisioning hostname "multinode-093300"
	I0520 05:00:07.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:09.948376    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:09.949087    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:09.949220    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:12.583471    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:12.584146    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:12.591999    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:12.591999    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:12.591999    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300 && echo "multinode-093300" | sudo tee /etc/hostname
	I0520 05:00:12.765697    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300
	
	I0520 05:00:12.765697    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:15.007583    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:15.007675    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:17.644774    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:17.651208    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:17.651778    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:17.651935    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:00:17.813002    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:00:17.813132    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:00:17.813132    4324 buildroot.go:174] setting up certificates
	I0520 05:00:17.813132    4324 provision.go:84] configureAuth start
	I0520 05:00:17.813132    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:20.030935    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:20.031563    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:22.718059    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:22.718326    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:24.937706    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:24.938150    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:27.665494    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:27.665726    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:27.665726    4324 provision.go:143] copyHostCerts
	I0520 05:00:27.665726    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:00:27.665726    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:00:27.665726    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:00:27.666778    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:00:27.667834    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:00:27.667994    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:00:27.667994    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:00:27.669343    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:00:27.669413    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:00:27.669413    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:00:27.669941    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:00:27.671135    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300 san=[127.0.0.1 172.25.248.197 localhost minikube multinode-093300]
	I0520 05:00:27.842841    4324 provision.go:177] copyRemoteCerts
	I0520 05:00:27.856315    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:00:27.856473    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:30.134879    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:30.135137    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:32.834462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:32.834796    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:32.958180    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1016037s)
	I0520 05:00:32.958180    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:00:32.958509    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:00:33.009329    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:00:33.009786    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 05:00:33.061375    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:00:33.061375    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:00:33.109459    4324 provision.go:87] duration metric: took 15.2962924s to configureAuth
	I0520 05:00:33.109459    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:00:33.110608    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:00:33.110726    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:35.340624    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:35.340715    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:35.340838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:38.009321    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:38.019168    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:38.019168    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:38.019750    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:00:38.162280    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:00:38.162280    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:00:38.162906    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:00:38.162906    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:40.372836    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:40.372951    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:43.028582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:43.036892    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:43.036892    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:43.036892    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:00:43.209189    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:00:43.209390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:45.440823    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:45.441335    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:48.106107    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:48.112128    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:00:48.112311    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:00:48.112311    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:00:50.250004    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:00:50.250134    4324 machine.go:97] duration metric: took 47.6080162s to provisionDockerMachine
	I0520 05:00:50.250213    4324 client.go:171] duration metric: took 2m1.6738486s to LocalClient.Create
	I0520 05:00:50.250213    4324 start.go:167] duration metric: took 2m1.6738486s to libmachine.API.Create "multinode-093300"
	I0520 05:00:50.250270    4324 start.go:293] postStartSetup for "multinode-093300" (driver="hyperv")
	I0520 05:00:50.250347    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:00:50.264103    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:00:50.264103    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:52.502474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:52.502956    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:00:55.171346    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:55.171731    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:00:55.292090    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0279067s)
	I0520 05:00:55.306342    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:00:55.312478    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:00:55.312546    4324 command_runner.go:130] > ID=buildroot
	I0520 05:00:55.312546    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:00:55.312546    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:00:55.312616    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:00:55.312715    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:00:55.312802    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:00:55.314228    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:00:55.314228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:00:55.330759    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:00:55.350089    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:00:55.393489    4324 start.go:296] duration metric: took 5.1431299s for postStartSetup
	I0520 05:00:55.396815    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:00:57.623600    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:00:57.624571    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:00.323281    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:00.323398    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:00.323556    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:01:00.326678    4324 start.go:128] duration metric: took 2m11.7550307s to createHost
	I0520 05:01:00.326865    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:02.576657    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:02.577370    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:02.577671    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:05.277488    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:05.284650    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:05.284864    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:05.284864    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206465.433808737
	
	I0520 05:01:05.429095    4324 fix.go:216] guest clock: 1716206465.433808737
	I0520 05:01:05.429095    4324 fix.go:229] Guest: 2024-05-20 05:01:05.433808737 -0700 PDT Remote: 2024-05-20 05:01:00.3267747 -0700 PDT m=+137.597009301 (delta=5.107034037s)
	I0520 05:01:05.429095    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:07.698603    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:07.698682    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:07.698757    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:10.386778    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:10.394083    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:01:10.394255    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.248.197 22 <nil> <nil>}
	I0520 05:01:10.394255    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206465
	I0520 05:01:10.543168    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:01:05 UTC 2024
	
	I0520 05:01:10.543168    4324 fix.go:236] clock set: Mon May 20 12:01:05 UTC 2024
	 (err=<nil>)
	I0520 05:01:10.543168    4324 start.go:83] releasing machines lock for "multinode-093300", held for 2m21.971498s
	I0520 05:01:10.543953    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:12.785675    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:12.785791    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:15.466419    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:15.466474    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:15.472046    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:01:15.472046    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:15.482838    4324 ssh_runner.go:195] Run: cat /version.json
	I0520 05:01:15.482838    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.792507    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.792604    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:17.795785    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.609270    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.609641    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:01:20.637468    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:01:20.638268    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:01:20.836539    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:01:20.836539    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3644799s)
	I0520 05:01:20.836755    4324 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	I0520 05:01:20.836847    4324 ssh_runner.go:235] Completed: cat /version.json: (5.3539043s)
	W0520 05:01:20.837157    4324 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:01:20.854048    4324 ssh_runner.go:195] Run: systemctl --version
	I0520 05:01:20.864811    4324 command_runner.go:130] > systemd 252 (252)
	I0520 05:01:20.864811    4324 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 05:01:20.876285    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:01:20.884648    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 05:01:20.885730    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:01:20.897213    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:01:20.926448    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:01:20.926448    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:01:20.926586    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:20.926840    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:20.961714    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:01:20.977711    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:01:21.013913    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:01:21.034768    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:01:21.055193    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:01:21.089853    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.124215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:01:21.158177    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:01:21.195917    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:01:21.229096    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:01:21.260386    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:01:21.293943    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:01:21.327963    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:01:21.347397    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:01:21.361783    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:01:21.392774    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:21.598542    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:01:21.637461    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:01:21.650160    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Unit]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:01:21.672238    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:01:21.672238    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:01:21.672238    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:01:21.672238    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:01:21.672238    4324 command_runner.go:130] > [Service]
	I0520 05:01:21.672238    4324 command_runner.go:130] > Type=notify
	I0520 05:01:21.672238    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:01:21.672238    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:01:21.672238    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:01:21.672238    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:01:21.672238    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:01:21.672238    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:01:21.672238    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:01:21.672238    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:01:21.673193    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=
	I0520 05:01:21.673193    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:01:21.673272    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:01:21.673272    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:01:21.673272    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:01:21.673342    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:01:21.673342    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:01:21.673342    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:01:21.673342    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:01:21.673342    4324 command_runner.go:130] > Delegate=yes
	I0520 05:01:21.673409    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:01:21.673409    4324 command_runner.go:130] > KillMode=process
	I0520 05:01:21.673409    4324 command_runner.go:130] > [Install]
	I0520 05:01:21.673409    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:01:21.687690    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.722276    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:01:21.773701    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:01:21.810158    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.844051    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:01:21.909678    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:01:21.933173    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:01:21.967868    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:01:21.981215    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:01:21.987552    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:01:22.002259    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:01:22.020741    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:01:22.065262    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:01:22.285713    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:01:22.490486    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:01:22.490688    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:01:22.535392    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:22.744190    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:25.280191    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5359959s)
	I0520 05:01:25.292183    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 05:01:25.336810    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:25.370725    4324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 05:01:25.575549    4324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 05:01:25.782162    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.001975    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 05:01:26.044858    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 05:01:26.083433    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:26.301690    4324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 05:01:26.409765    4324 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 05:01:26.425779    4324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 05:01:26.434577    4324 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0520 05:01:26.434693    4324 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 05:01:26.434775    4324 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0520 05:01:26.434775    4324 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0520 05:01:26.434821    4324 command_runner.go:130] > Access: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434845    4324 command_runner.go:130] > Modify: 2024-05-20 12:01:26.333291358 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] > Change: 2024-05-20 12:01:26.337291376 +0000
	I0520 05:01:26.434874    4324 command_runner.go:130] >  Birth: -
	I0520 05:01:26.434874    4324 start.go:562] Will wait 60s for crictl version
	I0520 05:01:26.447346    4324 ssh_runner.go:195] Run: which crictl
	I0520 05:01:26.452390    4324 command_runner.go:130] > /usr/bin/crictl
	I0520 05:01:26.466147    4324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 05:01:26.531780    4324 command_runner.go:130] > Version:  0.1.0
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeName:  docker
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0520 05:01:26.531780    4324 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 05:01:26.532353    4324 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 05:01:26.542344    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.573939    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.584653    4324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 05:01:26.621219    4324 command_runner.go:130] > 26.0.2
	I0520 05:01:26.625205    4324 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 05:01:26.625205    4324 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 05:01:26.629203    4324 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 05:01:26.632201    4324 ip.go:210] interface addr: 172.25.240.1/20
	I0520 05:01:26.647154    4324 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 05:01:26.654968    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
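	The grep/echo/cp pipeline above is minikube's idempotent hosts-entry update: strip any stale host.minikube.internal line, append the current one, and copy the temp file back over /etc/hosts. A sketch of the same pattern against a scratch file (the 172.25.240.9 stale entry is invented for the demo):

	```shell
	# Idempotent hosts rewrite: remove old entry, append fresh one,
	# copy back. Runs on a scratch copy, never the real /etc/hosts.
	HOSTS="$(mktemp)"
	TAB="$(printf '\t')"
	printf '127.0.0.1\tlocalhost\n172.25.240.9\thost.minikube.internal\n' > "$HOSTS"
	{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"
	  printf '172.25.240.1\thost.minikube.internal\n'; } > "$HOSTS.new"
	cp "$HOSTS.new" "$HOSTS"
	```

	Running it twice leaves exactly one entry, which is why the log can repeat this step on every start.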
	I0520 05:01:26.678731    4324 kubeadm.go:877] updating cluster {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 05:01:26.679252    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:01:26.688329    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:26.709358    4324 docker.go:685] Got preloaded images: 
	I0520 05:01:26.709358    4324 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0520 05:01:26.721315    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:26.740353    4324 command_runner.go:139] > {"Repositories":{}}
	I0520 05:01:26.752408    4324 ssh_runner.go:195] Run: which lz4
	I0520 05:01:26.760110    4324 command_runner.go:130] > /usr/bin/lz4
	I0520 05:01:26.760166    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 05:01:26.774597    4324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 05:01:26.780503    4324 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781265    4324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 05:01:26.781575    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0520 05:01:28.831959    4324 docker.go:649] duration metric: took 2.0713779s to copy over tarball
	I0520 05:01:28.845119    4324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 05:01:42.898168    4324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (14.0529589s)
	I0520 05:01:42.898246    4324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 05:01:42.961297    4324 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 05:01:42.979516    4324 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0520 05:01:42.979516    4324 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0520 05:01:43.025142    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:43.232187    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:01:46.340034    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1072152s)
	I0520 05:01:46.347602    4324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 05:01:46.378072    4324 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:01:46.378658    4324 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:01:46.378731    4324 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0520 05:01:46.378731    4324 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:01:46.378811    4324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
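	The "wasn't preloaded" check earlier, and the preloaded-image list above, boil down to comparing `docker images --format {{.Repository}}:{{.Tag}}` output against the images kubeadm needs. A rough equivalent, with the docker output stubbed (etcd deliberately left out of the "have" list) so the sketch runs without a docker daemon:

	```shell
	# Stubbed preload check: "have" stands in for docker's image list.
	have='registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1'
	missing=0
	for img in registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/etcd:3.5.12-0; do
	  # -x matches the whole line, -F disables regex metacharacters
	  printf '%s\n' "$have" | grep -qxF "$img" \
	    || { echo "$img wasn't preloaded"; missing=$((missing + 1)); }
	done
	```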
	I0520 05:01:46.378916    4324 cache_images.go:84] Images are preloaded, skipping loading
	I0520 05:01:46.378916    4324 kubeadm.go:928] updating node { 172.25.248.197 8443 v1.30.1 docker true true} ...
	I0520 05:01:46.379030    4324 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-093300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.248.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 05:01:46.389903    4324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 05:01:46.426774    4324 command_runner.go:130] > cgroupfs
	I0520 05:01:46.426774    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:01:46.426774    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:01:46.426774    4324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 05:01:46.426774    4324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.248.197 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-093300 NodeName:multinode-093300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.248.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.248.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 05:01:46.427750    4324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.248.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-093300"
	  kubeletExtraArgs:
	    node-ip: 172.25.248.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.248.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
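	The kubeadm config dumped above is a single YAML stream carrying four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A stub of that shape, with a count of the `kind:` declarations as a cheap sanity check before handing the file to kubeadm (the heredoc is a placeholder, not the real config):

	```shell
	# Stubbed four-document kubeadm config stream; only the kinds matter here.
	cfg="$(mktemp)"
	cat > "$cfg" <<'EOF'
	kind: InitConfiguration
	---
	kind: ClusterConfiguration
	---
	kind: KubeletConfiguration
	---
	kind: KubeProxyConfiguration
	EOF
	kinds="$(grep -c '^kind:' "$cfg")"   # expect 4
	```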
	I0520 05:01:46.437788    4324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubeadm
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubectl
	I0520 05:01:46.456766    4324 command_runner.go:130] > kubelet
	I0520 05:01:46.456766    4324 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 05:01:46.468762    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 05:01:46.488380    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 05:01:46.520098    4324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 05:01:46.550297    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0520 05:01:46.596423    4324 ssh_runner.go:195] Run: grep 172.25.248.197	control-plane.minikube.internal$ /etc/hosts
	I0520 05:01:46.603335    4324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.248.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 05:01:46.637601    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:01:46.844575    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:01:46.880421    4324 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300 for IP: 172.25.248.197
	I0520 05:01:46.880480    4324 certs.go:194] generating shared ca certs ...
	I0520 05:01:46.880480    4324 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:46.881024    4324 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 05:01:46.881439    4324 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 05:01:46.881677    4324 certs.go:256] generating profile certs ...
	I0520 05:01:46.882800    4324 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key
	I0520 05:01:46.883051    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt with IP's: []
	I0520 05:01:47.103021    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt ...
	I0520 05:01:47.103021    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.crt: {Name:mk58d73b9dc2281d7f157ffe4774c1f4f0fecb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.105028    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key ...
	I0520 05:01:47.105028    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\client.key: {Name:mk17b5a438282fac7be871025284b396ab3f53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.106049    4324 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102
	I0520 05:01:47.107025    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.248.197]
	I0520 05:01:47.481423    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 ...
	I0520 05:01:47.481423    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102: {Name:mkedd15ad66390b0277b6b97455babf608f59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483185    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 ...
	I0520 05:01:47.483185    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102: {Name:mke71bd5e0f385e9ba6e33e0c1f9bb7aa10e9276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.483816    4324 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt
	I0520 05:01:47.495038    4324 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key.645d0102 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key
	I0520 05:01:47.496339    4324 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key
	I0520 05:01:47.497396    4324 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt with IP's: []
	I0520 05:01:47.913597    4324 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt ...
	I0520 05:01:47.913597    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt: {Name:mk790d9d87ea15dd373c018a33346efcf5471ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.914449    4324 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key ...
	I0520 05:01:47.914449    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key: {Name:mkfc1d8e0440f65b464294b3e6a06ea8dc06e3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:01:47.915591    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 05:01:47.916550    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0520 05:01:47.916897    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 05:01:47.917064    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 05:01:47.917323    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 05:01:47.917499    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 05:01:47.917676    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 05:01:47.927613    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 05:01:47.927904    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 05:01:47.927904    4324 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 05:01:47.928586    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 05:01:47.928685    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 05:01:47.928976    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 05:01:47.929256    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 05:01:47.929492    4324 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 05:01:47.929492    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem -> /usr/share/ca-certificates/4100.pem
	I0520 05:01:47.930207    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /usr/share/ca-certificates/41002.pem
	I0520 05:01:47.931009    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 05:01:47.983102    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 05:01:48.023567    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 05:01:48.073417    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 05:01:48.117490    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 05:01:48.171432    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 05:01:48.218193    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 05:01:48.263514    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 05:01:48.306699    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 05:01:48.352131    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 05:01:48.396822    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 05:01:48.439360    4324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 05:01:48.488021    4324 ssh_runner.go:195] Run: openssl version
	I0520 05:01:48.497464    4324 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 05:01:48.513660    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 05:01:48.546683    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553561    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.553639    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.572303    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 05:01:48.580999    4324 command_runner.go:130] > b5213941
	I0520 05:01:48.595025    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 05:01:48.626998    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 05:01:48.659408    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665633    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.665828    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.680252    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 05:01:48.690087    4324 command_runner.go:130] > 51391683
	I0520 05:01:48.704031    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 05:01:48.739445    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 05:01:48.773393    4324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.781233    4324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.794391    4324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 05:01:48.803796    4324 command_runner.go:130] > 3ec20f2e
	I0520 05:01:48.819163    4324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
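The lines above show minikube computing each CA certificate's OpenSSL subject hash and symlinking the cert under `<hash>.0` in `/etc/ssl/certs`, which is how OpenSSL's `CApath` lookup finds it. A minimal sketch of the same pattern, using a throwaway self-signed cert and temp-dir paths instead of the real `/usr/share/ca-certificates` files (requires the `openssl` CLI):

```shell
# Hypothetical paths; mirrors the hash-and-symlink step from the log above.
tmp=$(mktemp -d)
# generate a throwaway self-signed certificate to stand in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -out "$tmp/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
# subject hash (8 hex chars, e.g. b5213941 in the log)
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# link the cert under <hash>.0 so OpenSSL's CApath lookup can resolve it
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
target=$(readlink "$tmp/$hash.0")
echo "$hash -> $target"
rm -rf "$tmp"
```

The `.0` suffix is a collision counter: a second certificate with the same subject hash would be linked as `<hash>.1`, and so on.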
	I0520 05:01:48.851154    4324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 05:01:48.857898    4324 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 05:01:48.858458    4324 kubeadm.go:391] StartCluster: {Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:01:48.869113    4324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 05:01:48.902631    4324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 05:01:48.930247    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0520 05:01:48.930408    4324 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0520 05:01:48.943409    4324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:01:48.990063    4324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0520 05:01:49.010189    4324 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:01:49.010189    4324 kubeadm.go:156] found existing configuration files:
	
	I0520 05:01:49.026646    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 05:01:49.044397    4324 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.045404    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:01:49.058854    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:01:49.091387    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 05:01:49.108810    4324 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.109707    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:01:49.121633    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:01:49.156566    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.173989    4324 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.173989    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:01:49.187572    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:01:49.216477    4324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 05:01:49.239108    4324 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.240604    4324 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:01:49.252996    4324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
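The grep-then-rm sequence above is minikube's stale-kubeconfig check: each of the four kubeconfig files is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so `kubeadm init` regenerates it. A minimal sketch of that decision, with a stand-in temp file instead of the real `/etc/kubernetes/*.conf` paths:

```shell
endpoint="https://control-plane.minikube.internal:8443"
conf=$(mktemp)
# stand-in kubeconfig pointing at a stale endpoint (hypothetical contents)
echo "server: https://old-endpoint:8443" > "$conf"
if ! grep -q "$endpoint" "$conf"; then
  # endpoint not found (grep also fails if the file is missing, as in the
  # log): remove the file so kubeadm writes a fresh one
  rm -f "$conf"
fi
if [ -e "$conf" ]; then result=kept; else result=removed; fi
echo "$result"
```

Note that `rm -f` makes the cleanup idempotent: it succeeds whether the file existed (stale) or was already absent, which is exactly the first-start case in the log.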
	I0520 05:01:49.273718    4324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:01:49.695339    4324 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:01:49.695453    4324 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 05:02:03.240278    4324 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241283    4324 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 05:02:03.241371    4324 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:03.241371    4324 command_runner.go:130] > [preflight] Running pre-flight checks
	I0520 05:02:03.241519    4324 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241519    4324 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:03.241771    4324 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241771    4324 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:03.241935    4324 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:03.241935    4324 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.241935    4324 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:03.244718    4324 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0520 05:02:03.244718    4324 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:03.244718    4324 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 05:02:03.245760    4324 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.245760    4324 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-093300] and IPs [172.25.248.197 127.0.0.1 ::1]
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 05:02:03.246689    4324 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0520 05:02:03.246689    4324 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:03.247689    4324 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:03.247689    4324 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.248681    4324 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:03.251675    4324 out.go:204]   - Booting up control plane ...
	I0520 05:02:03.251675    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.251675    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:03.252680    4324 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0520 05:02:03.252680    4324 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:03.253685    4324 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001860902s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 05:02:03.253685    4324 kubeadm.go:309] [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.253685    4324 command_runner.go:130] > [api-check] The API server is healthy after 6.502800776s
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:02:03.254700    4324 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:02:03.254700    4324 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.254700    4324 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:02:03.255741    4324 command_runner.go:130] > [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [mark-control-plane] Marking the node multinode-093300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:02:03.255741    4324 kubeadm.go:309] [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.255741    4324 command_runner.go:130] > [bootstrap-token] Using token: somuqs.h4yzg3rk2hezfv3h
	I0520 05:02:03.260685    4324 out.go:204]   - Configuring RBAC rules ...
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:02:03.260685    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.260685    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:02:03.261690    4324 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:02:03.261690    4324 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:02:03.261690    4324 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.261690    4324 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0520 05:02:03.261690    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:02:03.262682    4324 kubeadm.go:309] 
	I0520 05:02:03.262682    4324 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:02:03.262682    4324 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0520 05:02:03.263670    4324 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:02:03.263670    4324 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:02:03.263670    4324 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:02:03.263670    4324 kubeadm.go:309] 
	I0520 05:02:03.263670    4324 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.263670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a \
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--control-plane 
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:02:03.264670    4324 kubeadm.go:309] 
	I0520 05:02:03.264670    4324 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token somuqs.h4yzg3rk2hezfv3h \
	I0520 05:02:03.264670    4324 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f4472b706e40ff605ed2fb388ef779ef2e2dd8db9083d856969de77019c9230a 
	I0520 05:02:03.264670    4324 cni.go:84] Creating CNI manager for ""
	I0520 05:02:03.264670    4324 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 05:02:03.268712    4324 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 05:02:03.282673    4324 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 05:02:03.291591    4324 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0520 05:02:03.291651    4324 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0520 05:02:03.291651    4324 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 05:02:03.291651    4324 command_runner.go:130] > Access: 2024-05-20 11:59:56.435118000 +0000
	I0520 05:02:03.291651    4324 command_runner.go:130] > Modify: 2024-05-13 16:13:21.000000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] > Change: 2024-05-20 04:59:48.781000000 +0000
	I0520 05:02:03.291739    4324 command_runner.go:130] >  Birth: -
	I0520 05:02:03.291739    4324 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 05:02:03.291739    4324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 05:02:03.345466    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > serviceaccount/kindnet created
	I0520 05:02:03.729276    4324 command_runner.go:130] > daemonset.apps/kindnet created
	I0520 05:02:03.729276    4324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-093300 minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=multinode-093300 minikube.k8s.io/primary=true
	I0520 05:02:03.745588    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:03.768874    4324 command_runner.go:130] > -16
	I0520 05:02:03.769036    4324 ops.go:34] apiserver oom_adj: -16
	I0520 05:02:04.052833    4324 command_runner.go:130] > node/multinode-093300 labeled
	I0520 05:02:04.054834    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0520 05:02:04.069946    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.173567    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:04.579695    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:04.689494    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.083161    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.194808    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:05.588547    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:05.702113    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.084162    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.198825    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:06.569548    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:06.685635    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.069514    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.175321    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:07.584283    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:07.711925    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.071415    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.186754    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:08.569853    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:08.680941    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.071584    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.182593    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:09.584703    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:09.702241    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.083285    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.200975    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:10.572347    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:10.688167    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.075104    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.181832    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:11.575922    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:11.690008    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.080038    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.201679    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:12.578799    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:12.698997    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.084502    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.190392    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:13.573880    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:13.690078    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.076994    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.186559    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:14.583653    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:14.701084    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.082864    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.193609    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:15.582286    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:15.769156    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.076203    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.214810    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:16.570549    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:16.758184    4324 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0520 05:02:17.074892    4324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:02:17.190532    4324 command_runner.go:130] > NAME      SECRETS   AGE
	I0520 05:02:17.190532    4324 command_runner.go:130] > default   0         1s
	I0520 05:02:17.190532    4324 kubeadm.go:1107] duration metric: took 13.4612249s to wait for elevateKubeSystemPrivileges
	W0520 05:02:17.190532    4324 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:02:17.190532    4324 kubeadm.go:393] duration metric: took 28.3320081s to StartCluster
	I0520 05:02:17.190532    4324 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.190532    4324 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:17.193457    4324 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:02:17.194983    4324 start.go:234] Will wait 6m0s for node &{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:02:17.197814    4324 out.go:177] * Verifying Kubernetes components...
	I0520 05:02:17.195044    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 05:02:17.195044    4324 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:02:17.195680    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:17.201245    4324 addons.go:69] Setting storage-provisioner=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:69] Setting default-storageclass=true in profile "multinode-093300"
	I0520 05:02:17.201245    4324 addons.go:234] Setting addon storage-provisioner=true in "multinode-093300"
	I0520 05:02:17.201245    4324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-093300"
	I0520 05:02:17.201245    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:17.201995    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.202747    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:17.218079    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:02:17.385314    4324 command_runner.go:130] > apiVersion: v1
	I0520 05:02:17.385314    4324 command_runner.go:130] > data:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   Corefile: |
	I0520 05:02:17.385314    4324 command_runner.go:130] >     .:53 {
	I0520 05:02:17.385314    4324 command_runner.go:130] >         errors
	I0520 05:02:17.385314    4324 command_runner.go:130] >         health {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            lameduck 5s
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         ready
	I0520 05:02:17.385314    4324 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            pods insecure
	I0520 05:02:17.385314    4324 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0520 05:02:17.385314    4324 command_runner.go:130] >            ttl 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         prometheus :9153
	I0520 05:02:17.385314    4324 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0520 05:02:17.385314    4324 command_runner.go:130] >            max_concurrent 1000
	I0520 05:02:17.385314    4324 command_runner.go:130] >         }
	I0520 05:02:17.385314    4324 command_runner.go:130] >         cache 30
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loop
	I0520 05:02:17.385314    4324 command_runner.go:130] >         reload
	I0520 05:02:17.385314    4324 command_runner.go:130] >         loadbalance
	I0520 05:02:17.385314    4324 command_runner.go:130] >     }
	I0520 05:02:17.385314    4324 command_runner.go:130] > kind: ConfigMap
	I0520 05:02:17.385314    4324 command_runner.go:130] > metadata:
	I0520 05:02:17.385314    4324 command_runner.go:130] >   creationTimestamp: "2024-05-20T12:02:02Z"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   name: coredns
	I0520 05:02:17.385314    4324 command_runner.go:130] >   namespace: kube-system
	I0520 05:02:17.385314    4324 command_runner.go:130] >   resourceVersion: "225"
	I0520 05:02:17.385314    4324 command_runner.go:130] >   uid: ce617ae2-a3d1-49a2-b942-8644e13040ab
	I0520 05:02:17.385984    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 05:02:17.541458    4324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:02:18.029125    4324 command_runner.go:130] > configmap/coredns replaced
	I0520 05:02:18.029457    4324 start.go:946] {"host.minikube.internal": 172.25.240.1} host record injected into CoreDNS's ConfigMap
	I0520 05:02:18.030472    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.032241    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.032528    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:18.035015    4324 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 05:02:18.035662    4324 node_ready.go:35] waiting up to 6m0s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:18.036074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.036141    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.036209    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.036349    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.037681    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:18.038966    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.038966    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.038966    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.038966    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: a7c33986-aa1e-4dfe-8a48-9a82d85b3444
	I0520 05:02:18.056456    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Audit-Id: 45af799b-0559-4baa-a2d6-8814dee5e027
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.056456    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.056456    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.056456    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:18.057459    4324 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"361","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.057459    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.057459    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.057459    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:18.057459    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.096268    4324 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0520 05:02:18.096268    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.096268    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.096268    4324 round_trippers.go:580]     Audit-Id: 1661c56f-1c6e-4a05-acba-17449d56ee65
	I0520 05:02:18.096268    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"363","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.550946    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:18.550946    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:18.550946    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:18.550946    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:18.554959    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.554959    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555043    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Content-Length: 291
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Audit-Id: f60369bf-9251-45df-8141-9459a452cde1
	I0520 05:02:18.555043    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:18.555129    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:18.555129    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:18.555043    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555129    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:18 GMT
	I0520 05:02:18.555252    4324 round_trippers.go:580]     Audit-Id: cd1342f7-0be8-4e5f-a05e-e2fa2902928e
	I0520 05:02:18.555252    4324 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b3c8338-ee70-4507-ab4c-755c7efe5897","resourceVersion":"376","creationTimestamp":"2024-05-20T12:02:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0520 05:02:18.555336    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:18.555447    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:18.555480    4324 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-093300" context rescaled to 1 replicas
	I0520 05:02:18.555743    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.039773    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.039773    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.039773    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.039773    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.044631    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:19.044871    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Audit-Id: 3354480a-c067-4fd9-a86a-678d70e313af
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.044871    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.044871    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.044967    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.045781    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.548015    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:19.548077    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:19.548077    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:19.548077    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:19.551814    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:19.551971    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Audit-Id: 5c5d2fd4-54a1-4f4f-8c7b-dc8917d1a58f
	I0520 05:02:19.551971    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:19.552037    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:19.552037    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:19 GMT
	I0520 05:02:19.552037    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.632847    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.633093    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:19.633206    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:19.636790    4324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:02:19.634449    4324 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:02:19.638145    4324 kapi.go:59] client config for multinode-093300: &rest.Config{Host:"https://172.25.248.197:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-093300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:02:19.639186    4324 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:19.639186    4324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:02:19.639289    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:19.639782    4324 addons.go:234] Setting addon default-storageclass=true in "multinode-093300"
	I0520 05:02:19.640340    4324 host.go:66] Checking if "multinode-093300" exists ...
	I0520 05:02:19.641274    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:20.038344    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.038415    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.038415    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.038415    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.042012    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.042565    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.042565    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.042565    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.042654    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Audit-Id: 4e3768d1-f431-4fae-b065-9f7291789027
	I0520 05:02:20.042712    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.044445    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:20.045286    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:20.543336    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:20.543336    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:20.543336    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:20.543336    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:20.547135    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:20.547135    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Audit-Id: 626b4415-29e5-4829-89e7-0e59b0628c81
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:20.547135    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:20.547135    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:20 GMT
	I0520 05:02:20.547690    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.047884    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.047884    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.047884    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.047884    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.053057    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.053057    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Audit-Id: b99f4b7d-62c7-46ab-bfa2-58bb6776e9d7
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.053057    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.053057    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.053454    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:21.538679    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:21.538679    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:21.538679    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:21.538679    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:21.543683    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:21.543683    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:21.543683    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:21 GMT
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Audit-Id: 4a0d99c9-3b15-4cb5-b6ba-ff5fdde9a712
	I0520 05:02:21.543683    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:21.543870    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:21.543943    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.046464    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.046464    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.046464    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.046464    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.052292    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:22.052292    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.052548    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Audit-Id: 57b7ba29-d681-4e25-b966-d2c8e7670552
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.052548    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.053290    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:22.053290    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.118462    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:22.125334    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:22.125403    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:22.125466    4324 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:22.125507    4324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:02:22.125507    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300 ).state
	I0520 05:02:22.550066    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:22.550066    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:22.550066    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:22.550066    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:22.554352    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:22.554444    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Audit-Id: 8d3af6be-4fc0-427e-aa8d-27a3ec0ff41a
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:22.554536    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:22.554619    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:22.554619    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:22 GMT
	I0520 05:02:22.555650    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.045973    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.046184    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.046184    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.046184    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.051324    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:23.051324    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.051324    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Audit-Id: 05514910-d125-4c5a-951c-6f8a3fbe34f1
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.051324    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.051324    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:23.540729    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:23.540832    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:23.540832    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:23.540832    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:23.543473    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:23.544442    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Audit-Id: 2466b041-9dd7-44a6-a0bf-be23adcf19a1
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:23.544442    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:23.544442    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:23.544530    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:23.544530    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:23 GMT
	I0520 05:02:23.544964    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.050569    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.050633    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.050633    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.050689    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.061387    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:24.061547    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.061547    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.061547    4324 round_trippers.go:580]     Audit-Id: 9a25787f-a6b6-4eaa-9b96-580d3729d7ac
	I0520 05:02:24.062694    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.063485    4324 node_ready.go:53] node "multinode-093300" has status "Ready":"False"
	I0520 05:02:24.540475    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:24.540475    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:24.540551    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:24.540551    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:24.549066    4324 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 05:02:24.549066    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Audit-Id: 0c6e8057-2d0e-4664-b230-0d22d3eec781
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:24.549066    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:24.549066    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:24 GMT
	I0520 05:02:24.549066    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.559068    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300 ).networkadapters[0]).ipaddresses[0]
	I0520 05:02:24.992390    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:24.992959    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:24.993250    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:25.045154    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.045154    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.045154    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.045154    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.052810    4324 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 05:02:25.052897    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.052968    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.052968    4324 round_trippers.go:580]     Audit-Id: ca4eba38-c1a9-4e23-a9c5-bbd8401f6be6
	I0520 05:02:25.052968    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.143831    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:02:25.544074    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:25.544074    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:25.544074    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:25.544074    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:25.549651    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:25.549651    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:25.549897    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:25.549897    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:25 GMT
	I0520 05:02:25.549958    4324 round_trippers.go:580]     Audit-Id: 78f646a2-8d70-4397-ad01-88d0263e55dc
	I0520 05:02:25.550779    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:25.636454    4324 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0520 05:02:25.636454    4324 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0520 05:02:25.636454    4324 command_runner.go:130] > pod/storage-provisioner created
	I0520 05:02:26.037527    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.037527    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.037527    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.037527    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.058086    4324 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 05:02:26.058086    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Audit-Id: 0a301cd5-94a9-4ac0-bc5b-4de5cabb1ce6
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.058558    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.058558    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.058652    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"311","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0520 05:02:26.542270    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.542363    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.542363    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.542363    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.547718    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:26.547718    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.547718    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Audit-Id: 78a2261d-4714-4ee2-b3b9-bae1613021ea
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.547718    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.547718    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:26.548471    4324 node_ready.go:49] node "multinode-093300" has status "Ready":"True"
	I0520 05:02:26.548471    4324 node_ready.go:38] duration metric: took 8.5126926s for node "multinode-093300" to be "Ready" ...
	I0520 05:02:26.548471    4324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:26.549568    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:26.549568    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.549568    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.549568    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.553260    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.554242    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.554242    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.554330    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Audit-Id: e7f5b694-2ff1-46c5-9f15-b6ac27033665
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.554354    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.555826    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54695 chars]
	I0520 05:02:26.560435    4324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:26.561179    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:26.561210    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.561210    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.561248    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.572001    4324 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0520 05:02:26.572001    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.572001    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.572001    4324 round_trippers.go:580]     Audit-Id: c0bb60e2-c20a-4569-a2bf-65b0b2877877
	I0520 05:02:26.572939    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:26.572939    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:26.572939    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:26.572939    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:26.572939    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:26.576007    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:26.576965    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:26.576965    4324 round_trippers.go:580]     Audit-Id: c2425871-ea04-488b-98f7-77af3de3523b
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:26.577025    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:26.577025    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:26 GMT
	I0520 05:02:26.577226    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.063759    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.063759    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.063759    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.063759    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.067325    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.068288    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.068316    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Audit-Id: f2c4dba1-3773-4dcd-811e-91482e4338c8
	I0520 05:02:27.068316    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.068609    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.069319    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.069319    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.069319    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.069319    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.072878    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.072878    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.072878    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.072878    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.073584    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.073584    4324 round_trippers.go:580]     Audit-Id: 1c043b42-c504-4d9c-82b8-bbfe1c831246
	I0520 05:02:27.073651    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.328064    4324 main.go:141] libmachine: [stdout =====>] : 172.25.248.197
	
	I0520 05:02:27.329153    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:27.329396    4324 sshutil.go:53] new ssh client: &{IP:172.25.248.197 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300\id_rsa Username:docker}
	I0520 05:02:27.510274    4324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:02:27.570871    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:27.570871    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.570871    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.570871    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.573988    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.573988    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.573988    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Audit-Id: bb817d05-8e95-4f9b-a0de-6cd0270f357e
	I0520 05:02:27.573988    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.573988    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:27.575194    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:27.575194    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.575194    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.575194    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.577139    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:27.577139    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Audit-Id: a1a9b8e1-f68c-48e4-8a69-9003f461e53e
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.577139    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.577139    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.577708    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:27.709074    4324 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0520 05:02:27.710022    4324 round_trippers.go:463] GET https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 05:02:27.710022    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.710022    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.710022    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.713956    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:27.713956    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Length: 1273
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Audit-Id: 41a109ab-0bfb-4ae2-ba95-578635f6a52c
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.713956    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.713956    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.713956    4324 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0520 05:02:27.715397    4324 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.715484    4324 round_trippers.go:463] PUT https://172.25.248.197:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 05:02:27.715484    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:27.715484    4324 round_trippers.go:473]     Content-Type: application/json
	I0520 05:02:27.715484    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:27.719895    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:27.719895    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:27 GMT
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Audit-Id: 1d45fa3d-fff4-4afd-9014-8fca4f4e671b
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:27.719895    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:27.719895    4324 round_trippers.go:580]     Content-Length: 1220
	I0520 05:02:27.719895    4324 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ba8de2e-95e0-4c40-80bb-42967ce3e9a9","resourceVersion":"411","creationTimestamp":"2024-05-20T12:02:27Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-20T12:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0520 05:02:27.725619    4324 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 05:02:27.727518    4324 addons.go:505] duration metric: took 10.53245s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 05:02:28.063355    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.063355    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.063355    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.063355    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.067529    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.067577    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Audit-Id: e24eced3-4a2f-4bc0-9d52-1d33442fb0a0
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.067577    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.067577    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.067846    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.068705    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.068705    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.068783    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.068783    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.073120    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:28.073120    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Audit-Id: d4159e1a-1636-417a-9dbe-b57eb765f6f7
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.073120    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.073120    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.073946    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.569423    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:28.569494    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.569494    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.569494    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.572945    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:28.572945    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Audit-Id: 286aea4e-4179-48a5-85ba-bb43ead6cf53
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.572945    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.572945    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.574432    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:28.575248    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:28.575333    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:28.575333    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:28.575333    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:28.577464    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:28.577464    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:28.577464    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:28 GMT
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Audit-Id: 088e0368-0d4f-4d14-838e-0bde7dfbdf8b
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:28.577464    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:28.578253    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:28.578828    4324 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"False"
	I0520 05:02:29.071183    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.071272    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.071331    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.071331    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.075940    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.075940    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Audit-Id: 8dce480a-dbc7-41ac-90b5-f8dea79978a5
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.075940    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.075940    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.076893    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"407","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0520 05:02:29.077901    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.077901    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.077901    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.077901    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.080892    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.080892    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Audit-Id: 3deb5ccd-0011-4eea-b05e-3e46b6ca46a1
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.080892    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.080892    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.081393    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.569145    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jwj2g
	I0520 05:02:29.569397    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.569397    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.569532    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.573625    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.573625    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Audit-Id: 79c13c8a-88e0-4bd2-a47b-77071114c493
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.573625    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.573625    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.574522    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0520 05:02:29.575800    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.575800    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.575800    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.575921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.579417    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.579417    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.579417    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Audit-Id: f7931507-c579-488b-b2cb-141661840483
	I0520 05:02:29.579417    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.580145    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.580675    4324 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.580675    4324 pod_ready.go:81] duration metric: took 3.0196984s for pod "coredns-7db6d8ff4d-jwj2g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580675    4324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.580921    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-093300
	I0520 05:02:29.580921    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.580921    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.580921    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.583575    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.583575    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Audit-Id: 299468dc-db40-44e8-bab5-8f0829d7830a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.583575    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.583575    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.583575    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-093300","namespace":"kube-system","uid":"294136a3-81cf-4279-ad8c-bd2183d49bb4","resourceVersion":"385","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.248.197:2379","kubernetes.io/config.hash":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.mirror":"2fd2b6b12bdd38e6e3a638eaeae24a9b","kubernetes.io/config.seen":"2024-05-20T12:01:55.034590165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0520 05:02:29.585502    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.585549    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.585628    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.585628    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.587906    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.587906    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Audit-Id: 3d3462b7-9442-4adb-9b2e-bf63cc704c60
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.587906    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.587906    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.587906    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.587906    4324 pod_ready.go:92] pod "etcd-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.587906    4324 pod_ready.go:81] duration metric: took 7.2314ms for pod "etcd-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.587906    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-093300
	I0520 05:02:29.587906    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.587906    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.587906    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.592451    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.592451    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.592451    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.593586    4324 round_trippers.go:580]     Audit-Id: 9aea5b66-caa8-4a2f-93cf-22d5345f582d
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.593611    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.593611    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.593880    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-093300","namespace":"kube-system","uid":"647ed188-e3c5-4c3d-91a7-71109868b8df","resourceVersion":"387","creationTimestamp":"2024-05-20T12:02:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.248.197:8443","kubernetes.io/config.hash":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.mirror":"0d38c167666abde6e81a5d207f054e45","kubernetes.io/config.seen":"2024-05-20T12:01:55.034595464Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0520 05:02:29.594691    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.594691    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.594745    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.594745    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.600498    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:29.600671    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.600671    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Audit-Id: 34ded673-2c07-4389-b3df-ae5b8d4080d1
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.600719    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.600719    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.601079    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.601538    4324 pod_ready.go:92] pod "kube-apiserver-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.601538    4324 pod_ready.go:81] duration metric: took 13.6318ms for pod "kube-apiserver-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.601538    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-093300
	I0520 05:02:29.601538    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.601538    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.601538    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.604158    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.604158    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Audit-Id: 5c195c70-6971-44ed-bb2d-2d80e97eb0ba
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.604158    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.604158    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.605167    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-093300","namespace":"kube-system","uid":"095554ec-48ae-4209-8ecf-183be09ee210","resourceVersion":"384","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.mirror":"e68a4785532be9f344a6eddf03f42624","kubernetes.io/config.seen":"2024-05-20T12:01:55.034596964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0520 05:02:29.605865    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.605865    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.605865    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.605922    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.607761    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:29.607761    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Audit-Id: 0cccc974-e264-4284-b4e6-3405e9711aee
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.607761    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.607761    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.609698    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.610112    4324 pod_ready.go:92] pod "kube-controller-manager-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.610184    4324 pod_ready.go:81] duration metric: took 8.6461ms for pod "kube-controller-manager-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610184    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.610406    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v5b8g
	I0520 05:02:29.610406    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.610406    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.610406    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.613002    4324 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 05:02:29.613002    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.613002    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.613231    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.613231    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.613286    4324 round_trippers.go:580]     Audit-Id: f615dadb-8cc1-4747-860a-38de7a8abcdb
	I0520 05:02:29.613579    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v5b8g","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab5696-b381-48e3-b120-109c905bb649","resourceVersion":"380","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bd3d0f1-ba67-466d-afb9-76a3e6946a31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd3d0f1-ba67-466d-afb9-76a3e6946a31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0520 05:02:29.614648    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.614648    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.614648    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.614648    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.619167    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:29.619167    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Audit-Id: aebe9f63-2178-4e74-ad09-1a2640e43dc2
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.619167    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.619281    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.619281    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.620605    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.621240    4324 pod_ready.go:92] pod "kube-proxy-v5b8g" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.621240    4324 pod_ready.go:81] duration metric: took 11.0561ms for pod "kube-proxy-v5b8g" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.621344    4324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.772817    4324 request.go:629] Waited for 151.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
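The "Waited for … due to client-side throttling" messages above come from the Kubernetes client's rate limiter, which caps request QPS with a token bucket and makes callers sleep when the bucket is empty. A minimal sketch of that mechanism (the QPS/burst numbers and class name here are illustrative, not client-go's actual configuration):

```python
class TokenBucket:
    """Illustrative token-bucket throttle producing waits like the log's
    'Waited for 151.2432ms due to client-side throttling' messages."""

    def __init__(self, qps: float, burst: int):
        self.rate = qps            # tokens refilled per second
        self.capacity = burst      # maximum stored tokens
        self.tokens = float(burst)
        self.last = 0.0            # timestamp of the last refill

    def wait_time(self, now: float) -> float:
        """Reserve one token; return how long the caller must sleep."""
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        # Not enough tokens: go into debt and wait for the shortfall.
        need = 1 - self.tokens
        self.tokens -= 1
        return need / self.rate


tb = TokenBucket(qps=5.0, burst=1)
print(tb.wait_time(0.0))  # first request passes immediately: 0.0
print(tb.wait_time(0.0))  # second immediate request must wait 1/5 s: 0.2
```

The real client distinguishes this local throttle from server-side priority and fairness, which is why the log explicitly says "not priority and fairness".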
	I0520 05:02:29.772817    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-093300
	I0520 05:02:29.773056    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.773113    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.773113    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.779383    4324 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 05:02:29.779383    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Audit-Id: 352e16f2-973e-4738-abbf-8f7369e0f32a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.779383    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.779383    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.779383    4324 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-093300","namespace":"kube-system","uid":"b61c4bc4-d298-4d3e-bcad-8d0da38abe73","resourceVersion":"386","creationTimestamp":"2024-05-20T12:02:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.mirror":"23a914a568186db579f35f8681a4a117","kubernetes.io/config.seen":"2024-05-20T12:02:02.661987458Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0520 05:02:29.978615    4324 request.go:629] Waited for 197.8853ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes/multinode-093300
	I0520 05:02:29.978867    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:29.978867    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:29.978867    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:29.983423    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:29.983423    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:29.983423    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:29 GMT
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Audit-Id: 07e00782-fed4-420f-b2e8-0900bf16b1c6
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:29.983423    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:29.983780    4324 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-20T12:01:59Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0520 05:02:29.984304    4324 pod_ready.go:92] pod "kube-scheduler-multinode-093300" in "kube-system" namespace has status "Ready":"True"
	I0520 05:02:29.984304    4324 pod_ready.go:81] duration metric: took 362.9592ms for pod "kube-scheduler-multinode-093300" in "kube-system" namespace to be "Ready" ...
	I0520 05:02:29.984304    4324 pod_ready.go:38] duration metric: took 3.4349657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 05:02:29.984304    4324 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:02:29.997125    4324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:02:30.024780    4324 command_runner.go:130] > 2091
	I0520 05:02:30.025078    4324 api_server.go:72] duration metric: took 12.8300047s to wait for apiserver process to appear ...
	I0520 05:02:30.025078    4324 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:02:30.025078    4324 api_server.go:253] Checking apiserver healthz at https://172.25.248.197:8443/healthz ...
	I0520 05:02:30.033524    4324 api_server.go:279] https://172.25.248.197:8443/healthz returned 200:
	ok
	I0520 05:02:30.033690    4324 round_trippers.go:463] GET https://172.25.248.197:8443/version
	I0520 05:02:30.033690    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.033690    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.033690    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.035178    4324 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 05:02:30.035178    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.035178    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Length: 263
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Audit-Id: 35ba91d4-5cea-4e2b-b4cb-6477c5de12b9
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.035468    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.035513    4324 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
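The readiness check above probes /healthz for a literal "ok" and then reads /version to report the control-plane version. Extracting the version string from that JSON body can be sketched as follows (the helper name is hypothetical; the payload is the one from the log):

```python
import json


def parse_git_version(payload: str) -> str:
    """Pull gitVersion out of an apiserver /version response body."""
    return json.loads(payload)["gitVersion"]


body = """{
  "major": "1",
  "minor": "30",
  "gitVersion": "v1.30.1",
  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
  "gitTreeState": "clean",
  "buildDate": "2024-05-14T10:42:02Z",
  "goVersion": "go1.22.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}"""

print(parse_git_version(body))  # v1.30.1
```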
	I0520 05:02:30.035655    4324 api_server.go:141] control plane version: v1.30.1
	I0520 05:02:30.035679    4324 api_server.go:131] duration metric: took 10.601ms to wait for apiserver health ...
	I0520 05:02:30.035679    4324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 05:02:30.181685    4324 request.go:629] Waited for 145.5783ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181940    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.181989    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.181989    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.181989    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.187775    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.188620    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Audit-Id: 6521551e-f943-4674-a745-0de4d386610a
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.188620    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.188620    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.191575    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.194631    4324 system_pods.go:59] 8 kube-system pods found
	I0520 05:02:30.194743    4324 system_pods.go:61] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.194743    4324 system_pods.go:61] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.194796    4324 system_pods.go:74] duration metric: took 159.0635ms to wait for pod list to return data ...
	I0520 05:02:30.194796    4324 default_sa.go:34] waiting for default service account to be created ...
	I0520 05:02:30.369715    4324 request.go:629] Waited for 174.5767ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/default/serviceaccounts
	I0520 05:02:30.369910    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.369910    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.369910    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.374499    4324 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 05:02:30.374499    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.374499    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Content-Length: 261
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Audit-Id: 32ae28bc-4b6b-4b73-af76-3642ae4dd814
	I0520 05:02:30.375093    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.375153    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.375153    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.375207    4324 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3635b85-e63b-4899-a8fd-0335408468bb","resourceVersion":"344","creationTimestamp":"2024-05-20T12:02:16Z"}}]}
	I0520 05:02:30.375857    4324 default_sa.go:45] found service account: "default"
	I0520 05:02:30.375957    4324 default_sa.go:55] duration metric: took 181.0604ms for default service account to be created ...
	I0520 05:02:30.375957    4324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 05:02:30.571641    4324 request.go:629] Waited for 195.4158ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/namespaces/kube-system/pods
	I0520 05:02:30.571873    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.571873    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.571873    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.577227    4324 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 05:02:30.577227    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Audit-Id: eca86c2b-9ede-445a-9320-723eb32e73ec
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.577227    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.577227    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.577746    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.579133    4324 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jwj2g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0f661b9c-3c82-4b40-aee4-f2cf48115e1d","resourceVersion":"421","creationTimestamp":"2024-05-20T12:02:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"1fbdcf31-dc52-4447-9c06-e809a6cff0f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-20T12:02:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fbdcf31-dc52-4447-9c06-e809a6cff0f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0520 05:02:30.584062    4324 system_pods.go:86] 8 kube-system pods found
	I0520 05:02:30.584062    4324 system_pods.go:89] "coredns-7db6d8ff4d-jwj2g" [0f661b9c-3c82-4b40-aee4-f2cf48115e1d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "etcd-multinode-093300" [294136a3-81cf-4279-ad8c-bd2183d49bb4] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kindnet-5v2g7" [c7edfbec-5144-48d9-a6a1-9bb6214b198d] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-apiserver-multinode-093300" [647ed188-e3c5-4c3d-91a7-71109868b8df] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-controller-manager-multinode-093300" [095554ec-48ae-4209-8ecf-183be09ee210] Running
	I0520 05:02:30.584183    4324 system_pods.go:89] "kube-proxy-v5b8g" [8eab5696-b381-48e3-b120-109c905bb649] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "kube-scheduler-multinode-093300" [b61c4bc4-d298-4d3e-bcad-8d0da38abe73] Running
	I0520 05:02:30.584301    4324 system_pods.go:89] "storage-provisioner" [602cea4d-2fe9-49e2-a7f4-87da56d86428] Running
	I0520 05:02:30.584301    4324 system_pods.go:126] duration metric: took 208.3433ms to wait for k8s-apps to be running ...
	I0520 05:02:30.584402    4324 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 05:02:30.599976    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:30.631281    4324 system_svc.go:56] duration metric: took 46.8793ms WaitForService to wait for kubelet
	I0520 05:02:30.631459    4324 kubeadm.go:576] duration metric: took 13.4363471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:02:30.631459    4324 node_conditions.go:102] verifying NodePressure condition ...
	I0520 05:02:30.777579    4324 request.go:629] Waited for 145.6934ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:463] GET https://172.25.248.197:8443/api/v1/nodes
	I0520 05:02:30.777694    4324 round_trippers.go:469] Request Headers:
	I0520 05:02:30.777758    4324 round_trippers.go:473]     Accept: application/json, */*
	I0520 05:02:30.777758    4324 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0520 05:02:30.781512    4324 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 05:02:30.781512    4324 round_trippers.go:577] Response Headers:
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Audit-Id: 8d96ae8d-f6e9-49e3-b346-07fa08e46bae
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Cache-Control: no-cache, private
	I0520 05:02:30.781512    4324 round_trippers.go:580]     Content-Type: application/json
	I0520 05:02:30.781512    4324 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e20d4cb9-5680-445e-b638-4b2e639eec6f
	I0520 05:02:30.781769    4324 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8598774a-24b2-4f42-99dd-56ba5530be6a
	I0520 05:02:30.781769    4324 round_trippers.go:580]     Date: Mon, 20 May 2024 12:02:30 GMT
	I0520 05:02:30.782003    4324 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-093300","uid":"5525c1c7-c3ff-4985-a8c6-bf7a7a9a3a86","resourceVersion":"404","creationTimestamp":"2024-05-20T12:01:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-093300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b01766f8cca110acded3a48649b81463b982c91","minikube.k8s.io/name":"multinode-093300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_20T05_02_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0520 05:02:30.782205    4324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 05:02:30.782205    4324 node_conditions.go:123] node cpu capacity is 2
	I0520 05:02:30.782205    4324 node_conditions.go:105] duration metric: took 150.7456ms to run NodePressure ...
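The NodePressure check reads capacity quantities such as "17734596Ki" off the node object. Kubernetes expresses these with binary suffixes (Ki, Mi, Gi, …); a minimal converter to raw bytes, assuming only the binary suffixes seen in node capacity fields, could look like this:

```python
# Minimal sketch of a Kubernetes binary-suffix quantity parser
# (handles only Ki/Mi/Gi/Ti; the full Quantity grammar is richer).
_BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}


def quantity_to_bytes(quantity: str) -> int:
    """Convert a binary-suffix quantity like '17734596Ki' to bytes."""
    for suffix, factor in _BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # bare integers are already in base units


print(quantity_to_bytes("17734596Ki"))  # 18160226304 (~17 GiB)
print(quantity_to_bytes("2"))           # 2 (the node's CPU count)
```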
	I0520 05:02:30.782205    4324 start.go:240] waiting for startup goroutines ...
	I0520 05:02:30.782738    4324 start.go:245] waiting for cluster config update ...
	I0520 05:02:30.782738    4324 start.go:254] writing updated cluster config ...
	I0520 05:02:30.787982    4324 out.go:177] 
	I0520 05:02:30.790978    4324 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.798625    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:02:30.800215    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.806144    4324 out.go:177] * Starting "multinode-093300-m02" worker node in "multinode-093300" cluster
	I0520 05:02:30.808402    4324 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:02:30.808402    4324 cache.go:56] Caching tarball of preloaded images
	I0520 05:02:30.808402    4324 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 05:02:30.808935    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:02:30.809085    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:02:30.813548    4324 start.go:360] acquireMachinesLock for multinode-093300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:02:30.814323    4324 start.go:364] duration metric: took 775.4µs to acquireMachinesLock for "multinode-093300-m02"
	I0520 05:02:30.814600    4324 start.go:93] Provisioning new machine with config: &{Name:multinode-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.248.197 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0520 05:02:30.814600    4324 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0520 05:02:30.819779    4324 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:02:30.820155    4324 start.go:159] libmachine.API.Create for "multinode-093300" (driver="hyperv")
	I0520 05:02:30.820155    4324 client.go:168] LocalClient.Create starting
	I0520 05:02:30.820433    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821124    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821326    4324 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Decoding PEM data...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: Parsing certificate...
	I0520 05:02:30.821608    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:32.866767    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 05:02:34.712000    4324 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:34.712080    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:36.287900    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:40.312021    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:40.314855    4324 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 05:02:40.789899    4324 main.go:141] libmachine: Creating SSH key...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: Creating VM...
	I0520 05:02:40.943165    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 05:02:44.077138    4324 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 05:02:44.077867    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:44.077927    4324 main.go:141] libmachine: Using switch "Default Switch"
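	The two ConvertTo-Json queries above list the available virtual switches, after which minikube settles on "Default Switch". A minimal sketch of that selection logic, assuming the preference order (an External switch first, then the well-known Default Switch GUID) — SwitchType 2 is External in Hyper-V's enum, 1 is Internal:

```python
import json

# Well-known GUID of Hyper-V's built-in "Default Switch" (seen in the log above).
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(get_vmswitch_json: str) -> str:
    """Pick a usable switch from `Hyper-V\\Get-VMSwitch | ConvertTo-Json` output.

    The preference order here (External, then Default Switch) is an assumption
    inferred from the filtered, sorted query in the log."""
    switches = json.loads(get_vmswitch_json)
    for sw in switches:
        if sw["SwitchType"] == 2:  # External: bridged to a physical NIC
            return sw["Name"]
    for sw in switches:
        if sw["Id"].lower() == DEFAULT_SWITCH_ID:
            return sw["Name"]
    raise RuntimeError("no usable Hyper-V virtual switch found")
```

	Feeding it the JSON block from the log would yield "Default Switch", since the only entry is the Internal (SwitchType 1) default switch matched by its GUID.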
	I0520 05:02:44.077927    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:45.938933    4324 main.go:141] libmachine: Creating VHD
	I0520 05:02:45.938933    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E3F31072-AF44-4FB5-B940-9D23E1A9108D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 05:02:49.948880    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing magic tar header
	I0520 05:02:49.948977    4324 main.go:141] libmachine: Writing SSH key tar header
	I0520 05:02:49.958215    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 05:02:53.279850    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [stderr =====>] : 
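	The "Writing magic tar header" / "Writing SSH key tar header" lines above refer to packing the freshly generated SSH key as a tar stream into the small fixed VHD before Convert-VHD turns it into the dynamic boot disk — a fixed VHD stores raw data from byte 0, so the guest can read the archive straight off the device. A rough Python sketch of the idea (the member path, offset 0, and guest-side extraction are assumptions, not minikube's exact layout):

```python
import io
import tarfile

def embed_key_tar(image_path: str, key_bytes: bytes) -> None:
    """Write a tar archive containing an SSH key at the start of a raw
    disk image. Member name and offset are illustrative only."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")
        info.size = len(key_bytes)
        info.mode = 0o600
        tar.addfile(info, io.BytesIO(key_bytes))
    with open(image_path, "r+b") as img:
        img.seek(0)                # fixed VHD: the data region begins at byte 0
        img.write(buf.getvalue())  # archive plus tar's end-of-archive padding
```

	Converting the VHD from fixed to dynamic afterwards, as the log does with `-DeleteSource`, preserves the embedded data while keeping the on-host file small.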
	I0520 05:02:53.280733    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd' -SizeBytes 20000MB
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:02:55.958976    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:55.959390    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-093300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:02:59.813794    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-093300-m02 -DynamicMemoryEnabled $false
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:02.295244    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:02.296026    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-093300-m02 -Count 2
	I0520 05:03:04.631114    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:04.631452    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\boot2docker.iso'
	I0520 05:03:07.372020    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:07.372243    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-093300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\disk.vhd'
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:10.180704    4324 main.go:141] libmachine: Starting VM...
	I0520 05:03:10.180890    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-093300-m02
	I0520 05:03:13.347859    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:13.348532    4324 main.go:141] libmachine: Waiting for host to start...
	I0520 05:03:13.348586    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:15.784852    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:15.785967    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:18.486222    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:18.486512    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:19.497087    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:21.878314    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:21.878623    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:24.559617    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:25.570379    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:27.900110    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:27.900222    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:30.585397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:31.595983    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:33.953429    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:33.953840    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:33.953964    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:03:36.668984    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:37.683774    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:40.038239    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:40.038452    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:40.038533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:42.750552    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:45.026253    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:45.026542    4324 main.go:141] libmachine: [stderr =====>] : 
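	The "Waiting for host to start..." block above is a poll loop: each cycle checks `( Hyper-V\Get-VM ... ).state`, then `ipaddresses[0]`, and sleeps roughly a second while stdout comes back empty, until the DHCP lease (172.25.240.19) appears. The loop reduces to a generic retry helper like this sketch (the timeout and interval values are assumptions):

```python
import time
from typing import Callable, Optional

def wait_for(probe: Callable[[], Optional[str]],
             timeout: float = 120.0, interval: float = 1.0) -> str:
    """Call probe() until it returns a non-empty string or the deadline
    passes. In the log, the probe shells out to PowerShell and returns
    the VM's first IPv4 address; empty stdout means none assigned yet."""
    deadline = time.monotonic() + timeout
    while True:
        value = probe()
        if value:
            return value
        if time.monotonic() >= deadline:
            raise TimeoutError("probe did not succeed within %.0fs" % timeout)
        time.sleep(interval)
```

	A probe wrapping `powershell.exe -NoProfile -NonInteractive "(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]"` would reproduce the behavior seen in the log.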
	I0520 05:03:45.026649    4324 machine.go:94] provisionDockerMachine start ...
	I0520 05:03:45.026717    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:47.323466    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:49.982521    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:49.982630    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:49.990197    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:49.999843    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:49.999843    4324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:03:50.131880    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:03:50.131981    4324 buildroot.go:166] provisioning hostname "multinode-093300-m02"
	I0520 05:03:50.132126    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:52.417828    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:52.418697    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:52.418850    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:03:55.117654    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:55.126001    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:03:55.126001    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:03:55.126001    4324 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-093300-m02 && echo "multinode-093300-m02" | sudo tee /etc/hostname
	I0520 05:03:55.287810    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-093300-m02
	
	I0520 05:03:55.287810    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:03:57.547392    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:03:57.548372    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:00.236296    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:00.243120    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:00.243684    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:00.243803    4324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-093300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-093300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-093300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:04:00.400796    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
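	The shell snippet above makes the /etc/hosts edit idempotent: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic as a pure-string Python sketch (illustrative, not minikube's code):

```python
import re

def ensure_hostname_entry(hosts_text: str, name: str) -> str:
    """Replicate the idempotent /etc/hosts update from the SSH command:
    skip if mapped, else rewrite the 127.0.1.1 line, else append."""
    if re.search(r"^.*\s%s$" % re.escape(name), hosts_text, re.M):
        return hosts_text                      # already present, no change
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 %s" % name,
                      hosts_text, flags=re.M)
    return hosts_text.rstrip("\n") + "\n127.0.1.1 %s\n" % name
```

	Running it twice with the same hostname leaves the file unchanged the second time, matching the guard in the shell version.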
	I0520 05:04:00.400796    4324 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:04:00.400796    4324 buildroot.go:174] setting up certificates
	I0520 05:04:00.400796    4324 provision.go:84] configureAuth start
	I0520 05:04:00.400796    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:02.704411    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:02.705380    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:02.705511    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:05.433435    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:05.433780    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:05.433904    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:07.683157    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:10.357903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:10.357903    4324 provision.go:143] copyHostCerts
	I0520 05:04:10.357903    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0520 05:04:10.357903    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:04:10.358552    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:04:10.359113    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:04:10.360289    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0520 05:04:10.360344    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:04:10.360344    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:04:10.360950    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:04:10.361751    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:04:10.361751    4324 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:04:10.361751    4324 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:04:10.364410    4324 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-093300-m02 san=[127.0.0.1 172.25.240.19 localhost minikube multinode-093300-m02]
	I0520 05:04:10.461439    4324 provision.go:177] copyRemoteCerts
	I0520 05:04:10.476897    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:04:10.476897    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:12.761310    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:12.761561    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:12.761627    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:15.461502    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:15.462387    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:15.566177    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0892336s)
	I0520 05:04:15.566229    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0520 05:04:15.566535    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:04:15.619724    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0520 05:04:15.620403    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0520 05:04:15.672890    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0520 05:04:15.673119    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 05:04:15.723725    4324 provision.go:87] duration metric: took 15.3228941s to configureAuth
	I0520 05:04:15.723886    4324 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:04:15.724660    4324 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:04:15.724760    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:18.012889    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:18.013429    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:20.703171    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:20.703451    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:20.709207    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:20.709923    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:20.709923    4324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:04:20.852167    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:04:20.852244    4324 buildroot.go:70] root file system type: tmpfs
	I0520 05:04:20.852374    4324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:04:20.852374    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:23.192710    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:23.193083    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:25.866320    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:25.866596    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:25.875904    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:25.875904    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:25.875904    4324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.248.197"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:04:26.046533    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.248.197
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:04:26.046533    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:28.296309    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:31.011090    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:31.012079    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:31.018140    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:31.018429    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:31.018429    4324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:04:33.214200    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:04:33.214200    4324 machine.go:97] duration metric: took 48.1874407s to provisionDockerMachine
	I0520 05:04:33.214200    4324 client.go:171] duration metric: took 2m2.3937022s to LocalClient.Create
	I0520 05:04:33.214732    4324 start.go:167] duration metric: took 2m2.394352s to libmachine.API.Create "multinode-093300"
	I0520 05:04:33.214778    4324 start.go:293] postStartSetup for "multinode-093300-m02" (driver="hyperv")
	I0520 05:04:33.214778    4324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:04:33.229112    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:04:33.229112    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:35.499582    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:35.500035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:38.244662    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:38.245416    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:38.245674    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:04:38.361513    4324 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1323583s)
	I0520 05:04:38.375196    4324 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:04:38.381690    4324 command_runner.go:130] > NAME=Buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 05:04:38.381690    4324 command_runner.go:130] > ID=buildroot
	I0520 05:04:38.381690    4324 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 05:04:38.381690    4324 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 05:04:38.381690    4324 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:04:38.381690    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:04:38.382234    4324 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:04:38.383159    4324 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:04:38.383228    4324 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> /etc/ssl/certs/41002.pem
	I0520 05:04:38.396253    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:04:38.413368    4324 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:04:38.459483    4324 start.go:296] duration metric: took 5.244693s for postStartSetup
	I0520 05:04:38.462591    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:40.719282    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:40.719441    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:43.416857    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:43.417284    4324 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-093300\config.json ...
	I0520 05:04:43.419860    4324 start.go:128] duration metric: took 2m12.6049549s to createHost
	I0520 05:04:43.420037    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:45.742397    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:48.458236    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:48.463273    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:48.464315    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:48.464315    4324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206688.615020262
	
	I0520 05:04:48.609413    4324 fix.go:216] guest clock: 1716206688.615020262
	I0520 05:04:48.609413    4324 fix.go:229] Guest: 2024-05-20 05:04:48.615020262 -0700 PDT Remote: 2024-05-20 05:04:43.4199466 -0700 PDT m=+360.689669201 (delta=5.195073662s)
	I0520 05:04:48.609413    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:50.862816    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:50.862963    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:50.863035    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:53.564119    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:53.570359    4324 main.go:141] libmachine: Using SSH client type: native
	I0520 05:04:53.571018    4324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.240.19 22 <nil> <nil>}
	I0520 05:04:53.571018    4324 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716206688
	I0520 05:04:53.719287    4324 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:04:48 UTC 2024
	
	I0520 05:04:53.719330    4324 fix.go:236] clock set: Mon May 20 12:04:48 UTC 2024
	 (err=<nil>)
	I0520 05:04:53.719330    4324 start.go:83] releasing machines lock for "multinode-093300-m02", held for 2m22.9046183s
	I0520 05:04:53.719330    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:04:55.986903    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:55.987756    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:04:58.703347    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:04:58.706572    4324 out.go:177] * Found network options:
	I0520 05:04:58.709151    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.711822    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.714051    4324 out.go:177]   - NO_PROXY=172.25.248.197
	W0520 05:04:58.716258    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 05:04:58.718435    4324 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 05:04:58.720792    4324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:04:58.720792    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:04:58.731793    4324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 05:04:58.731793    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-093300-m02 ).state
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.126899    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127053    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:05:01.127292    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:01.127392    4324 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-093300-m02 ).networkadapters[0]).ipaddresses[0]
	I0520 05:05:03.944824    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.945662    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.945662    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stdout =====>] : 172.25.240.19
	
	I0520 05:05:03.968217    4324 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:05:03.968217    4324 sshutil.go:53] new ssh client: &{IP:172.25.240.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-093300-m02\id_rsa Username:docker}
	I0520 05:05:04.098968    4324 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 05:05:04.099032    4324 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3673872s)
	W0520 05:05:04.099235    4324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:05:04.099235    4324 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3782282s)
	I0520 05:05:04.115204    4324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:05:04.146295    4324 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0520 05:05:04.146295    4324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:05:04.146295    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.146295    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:04.190520    4324 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0520 05:05:04.206097    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 05:05:04.242006    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:05:04.262311    4324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:05:04.278039    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:05:04.310970    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.344668    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:05:04.376394    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:05:04.409743    4324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:05:04.441974    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:05:04.477215    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:05:04.516112    4324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:05:04.552125    4324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:05:04.570823    4324 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 05:05:04.584912    4324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:05:04.617872    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:04.823581    4324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:05:04.858259    4324 start.go:494] detecting cgroup driver to use...
	I0520 05:05:04.874430    4324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Unit]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Description=Docker Application Container Engine
	I0520 05:05:04.898122    4324 command_runner.go:130] > Documentation=https://docs.docker.com
	I0520 05:05:04.898122    4324 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0520 05:05:04.898122    4324 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitBurst=3
	I0520 05:05:04.898122    4324 command_runner.go:130] > StartLimitIntervalSec=60
	I0520 05:05:04.898122    4324 command_runner.go:130] > [Service]
	I0520 05:05:04.898122    4324 command_runner.go:130] > Type=notify
	I0520 05:05:04.898122    4324 command_runner.go:130] > Restart=on-failure
	I0520 05:05:04.898122    4324 command_runner.go:130] > Environment=NO_PROXY=172.25.248.197
	I0520 05:05:04.898122    4324 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0520 05:05:04.898122    4324 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0520 05:05:04.898122    4324 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0520 05:05:04.898122    4324 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0520 05:05:04.898122    4324 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0520 05:05:04.898122    4324 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0520 05:05:04.898122    4324 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0520 05:05:04.898122    4324 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0520 05:05:04.898122    4324 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0520 05:05:04.898122    4324 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNOFILE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitNPROC=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > LimitCORE=infinity
	I0520 05:05:04.898122    4324 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0520 05:05:04.898660    4324 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0520 05:05:04.898660    4324 command_runner.go:130] > TasksMax=infinity
	I0520 05:05:04.898660    4324 command_runner.go:130] > TimeoutStartSec=0
	I0520 05:05:04.898715    4324 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0520 05:05:04.898715    4324 command_runner.go:130] > Delegate=yes
	I0520 05:05:04.898715    4324 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0520 05:05:04.898770    4324 command_runner.go:130] > KillMode=process
	I0520 05:05:04.898770    4324 command_runner.go:130] > [Install]
	I0520 05:05:04.898807    4324 command_runner.go:130] > WantedBy=multi-user.target
	I0520 05:05:04.912428    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:04.950550    4324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:05:05.005823    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:05:05.044508    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.085350    4324 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:05:05.159796    4324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:05:05.184338    4324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:05:05.218187    4324 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0520 05:05:05.232266    4324 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:05:05.238954    4324 command_runner.go:130] > /usr/bin/cri-dockerd
	I0520 05:05:05.254357    4324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:05:05.274206    4324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:05:05.320773    4324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:05:05.543311    4324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:05:05.739977    4324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:05:05.740224    4324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:05:05.786839    4324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:05:05.985485    4324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:06:07.138893    4324 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0520 05:06:07.138893    4324 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0520 05:06:07.139533    4324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1538051s)
	I0520 05:06:07.153262    4324 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:06:07.177331    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	I0520 05:06:07.177451    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0520 05:06:07.177588    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0520 05:06:07.177652    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0520 05:06:07.177784    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0520 05:06:07.177848    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.177904    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.177957    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178060    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178137    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0520 05:06:07.178215    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178328    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178382    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178441    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178498    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0520 05:06:07.178633    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0520 05:06:07.178694    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0520 05:06:07.178762    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0520 05:06:07.178827    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	I0520 05:06:07.178885    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0520 05:06:07.178948    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0520 05:06:07.179006    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0520 05:06:07.179058    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0520 05:06:07.179127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179190    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0520 05:06:07.179248    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0520 05:06:07.179310    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179367    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179455    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179539    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179598    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179683    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179763    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179837    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179894    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.179958    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180056    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180130    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180185    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180255    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180307    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180400    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180476    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180540    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0520 05:06:07.180603    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180721    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0520 05:06:07.180783    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180851    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0520 05:06:07.180989    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0520 05:06:07.181050    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0520 05:06:07.181127    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0520 05:06:07.181180    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0520 05:06:07.181225    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	I0520 05:06:07.181289    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0520 05:06:07.181366    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0520 05:06:07.181422    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0520 05:06:07.181483    4324 command_runner.go:130] > May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0520 05:06:07.181548    4324 command_runner.go:130] > May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	I0520 05:06:07.181611    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	I0520 05:06:07.181666    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0520 05:06:07.181726    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	I0520 05:06:07.181781    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	I0520 05:06:07.181842    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	I0520 05:06:07.181896    4324 command_runner.go:130] > May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	I0520 05:06:07.181956    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	I0520 05:06:07.182010    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0520 05:06:07.182106    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0520 05:06:07.182161    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	I0520 05:06:07.182222    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0520 05:06:07.182336    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0520 05:06:07.182391    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0520 05:06:07.182451    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0520 05:06:07.182517    4324 command_runner.go:130] > May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	I0520 05:06:07.182603    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0520 05:06:07.182672    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0520 05:06:07.182784    4324 command_runner.go:130] > May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0520 05:06:07.193257    4324 out.go:177] 
	W0520 05:06:07.196057    4324 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:04:31 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.669816535Z" level=info msg="Starting up"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.670585547Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:04:31 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:31.671663264Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.709198643Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737484679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737617681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737818184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737843185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.737927986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738033588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738365293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738479294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738517295Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738529795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738622197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.738929201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741823846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.741918547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742087750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742376355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742533557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742717760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.742838862Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774526151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774713153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774751954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774779454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774798855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.774967557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775415564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775649968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775695669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775715669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775732569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775750169Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775767570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775793070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775811570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775829571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775846571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775863071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775889172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775906672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775921672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775937072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775951473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775965973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775979373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.775993173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776009173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776025974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776039374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776057674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776072074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776090575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776212477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776228077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776241677Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776294178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776492581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776590282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776614483Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776719084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776760285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.776778285Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777334694Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777492996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777574098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:04:31 multinode-093300-m02 dockerd[673]: time="2024-05-20T12:04:31.777680399Z" level=info msg="containerd successfully booted in 0.069776s"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.751650933Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:04:32 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:32.782469679Z" level=info msg="Loading containers: start."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.074704793Z" level=info msg="Loading containers: done."
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095098279Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.095310382Z" level=info msg="Daemon has completed initialization"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217736097Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:04:33 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:04:33.217860299Z" level=info msg="API listen on [::]:2376"
	May 20 12:04:33 multinode-093300-m02 systemd[1]: Started Docker Application Container Engine.
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.030007076Z" level=info msg="Processing signal 'terminated'"
	May 20 12:05:06 multinode-093300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.031878079Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032273979Z" level=info msg="Daemon shutdown complete"
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032334579Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:05:06 multinode-093300-m02 dockerd[667]: time="2024-05-20T12:05:06.032350479Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:05:07 multinode-093300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:05:07 multinode-093300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:05:07 multinode-093300-m02 dockerd[1019]: time="2024-05-20T12:05:07.116146523Z" level=info msg="Starting up"
	May 20 12:06:07 multinode-093300-m02 dockerd[1019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:06:07 multinode-093300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:06:07 multinode-093300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:06:07.196057    4324 out.go:239] * 
	W0520 05:06:07.198061    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:07.200275    4324 out.go:177] 
	
	
	==> Docker <==
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:25:20 multinode-093300 dockerd[1329]: 2024/05/20 12:25:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 12:27:25 multinode-093300 dockerd[1329]: 2024/05/20 12:27:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
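The dockerd log above is dominated by one message repeated in bursts. When triaging, it can help to collapse such runs into counts. A minimal sketch (the prefix regex is an assumption fitted to the journal format shown above, not an official grammar):

```python
import re
from itertools import groupby

# Strip the "May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 "
# prefix so that repeats of the same message compare equal.
PREFIX = re.compile(
    r"^\w{3} \d{2} [\d:]{8} \S+ dockerd\[\d+\]: [\d/]{10} [\d:]{8} "
)

def dedupe(lines):
    """Collapse consecutive identical dockerd messages into (count, message)."""
    messages = (PREFIX.sub("", line) for line in lines)
    return [(len(list(group)), msg) for msg, group in groupby(messages)]

sample = [
    "May 20 12:19:20 multinode-093300 dockerd[1329]: 2024/05/20 12:19:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)",
    "May 20 12:23:56 multinode-093300 dockerd[1329]: 2024/05/20 12:23:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)",
]
print(dedupe(sample))  # one entry with count 2
```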
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb9d0befbc6f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Running             busybox                   0                   2ffde8c3540f6       busybox-fc5497c4f-rk7lk
	c2f3e10de8772       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   ad5e2e80d0f28       coredns-7db6d8ff4d-jwj2g
	2842c911dbc89       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   fe98a09c9c2b4       storage-provisioner
	14783dea12405       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   bf6cad91522ea       kindnet-5v2g7
	ab52c7f8615e3       747097150317f                                                                                         27 minutes ago      Running             kube-proxy                0                   3906b8cbcfafd       kube-proxy-v5b8g
	8ec8f8bdd4545       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   6841210d98cd7       kube-scheduler-multinode-093300
	477e3df15a9c5       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   dd4d5da9f6aa3       kube-apiserver-multinode-093300
	b9140502b5271       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   7e071ea9ceb25       etcd-multinode-093300
	b87bdfdab24dd       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   443dbaa862ef6       kube-controller-manager-multinode-093300
	
	
	==> coredns [c2f3e10de877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e1af8f27f3b24191b44f318b875fb31e6fccb7bb3ba440c6bb1c4a8079806171859eb9f6b92104d18a13de8e8ad4b6843c1fed2594a05994cff134af1ed12027
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35217 - 31795 "HINFO IN 1094329331258085313.6714271298075950412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042145657s
	[INFO] 10.244.0.3:48640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231801s
	[INFO] 10.244.0.3:43113 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.175241678s
	[INFO] 10.244.0.3:55421 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066162156s
	[INFO] 10.244.0.3:57037 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.307819065s
	[INFO] 10.244.0.3:46291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186401s
	[INFO] 10.244.0.3:42353 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028087509s
	[INFO] 10.244.0.3:39344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194901s
	[INFO] 10.244.0.3:36993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000272401s
	[INFO] 10.244.0.3:48495 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011425645s
	[INFO] 10.244.0.3:49945 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142201s
	[INFO] 10.244.0.3:52438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001537s
	[INFO] 10.244.0.3:51309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.0.3:43788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001446s
	[INFO] 10.244.0.3:48355 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215101s
	[INFO] 10.244.0.3:46628 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000596s
	[INFO] 10.244.0.3:52558 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000566602s
	[INFO] 10.244.0.3:32981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320401s
	[INFO] 10.244.0.3:49440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000250601s
	[INFO] 10.244.0.3:54411 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000254101s
	[INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s
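The CoreDNS query lines above follow a fixed field layout: client:port, query id, the quoted question (type, class, name, proto, request size, DO bit, advertised bufsize), then rcode, response flags, response size, and duration. A hedged parser sketch for that layout (field names are my own labels, assumed from the lines shown, not taken from CoreDNS documentation):

```python
import re

# One CoreDNS query log line, e.g.:
# [INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s
QUERY = re.compile(
    r'\[INFO\] (?P<client>[\d.]+):(?P<port>\d+) - (?P<id>\d+) '
    r'"(?P<qtype>\S+) (?P<qclass>\S+) (?P<name>\S+) (?P<proto>\S+) '
    r'(?P<size>\d+) (?P<do>\S+) (?P<bufsize>\d+)" '
    r'(?P<rcode>\S+) (?P<flags>\S+) (?P<rsize>\d+) (?P<duration>\S+)'
)

def parse_query(line):
    """Return the fields of a CoreDNS query line as a dict, or None."""
    m = QUERY.match(line)
    return m.groupdict() if m else None

line = ('[INFO] 10.244.0.3:44358 - 5 "PTR IN 1.240.25.172.in-addr.arpa. '
        'udp 43 false 512" NOERROR qr,aa,rd 104 0.000269301s')
print(parse_query(line)["rcode"])  # NOERROR
```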
	
	
	==> describe nodes <==
	Name:               multinode-093300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T05_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:01:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:29:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:27:33 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:27:33 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:27:33 +0000   Mon, 20 May 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:27:33 +0000   Mon, 20 May 2024 12:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.248.197
	  Hostname:    multinode-093300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7333a5aabd6940aab884192911ea0c22
	  System UUID:                e48c726f-f3ec-7542-93a3-38363a828b7d
	  Boot ID:                    254e22b9-a928-4446-8aa2-37c7bec4f5f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rk7lk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-jwj2g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-093300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-5v2g7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-093300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-multinode-093300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-v5b8g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-093300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node multinode-093300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node multinode-093300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node multinode-093300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node multinode-093300 event: Registered Node multinode-093300 in Controller
	  Normal  NodeReady                27m   kubelet          Node multinode-093300 status is now: NodeReady
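The "Allocated resources" block for this node is just the column sums of the pod table: the CPU requests of the kube-system pods add up to 850m, which against the node's 2 allocatable CPUs is 42.5% (truncated to 42% in the output). A quick check, with the per-pod values copied from the table above:

```python
# CPU requests (millicores) from the control-plane node's pod table.
requests_m = {
    "coredns-7db6d8ff4d-jwj2g": 100,
    "etcd-multinode-093300": 100,
    "kindnet-5v2g7": 100,
    "kube-apiserver-multinode-093300": 250,
    "kube-controller-manager-multinode-093300": 200,
    "kube-scheduler-multinode-093300": 100,
}
allocatable_m = 2 * 1000  # node reports cpu: 2

total_m = sum(requests_m.values())
print(total_m, f"{100 * total_m / allocatable_m:.1f}%")  # 850 42.5%
```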
	
	
	Name:               multinode-093300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-093300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-093300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T05_22_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 12:22:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-093300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:25:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 12:23:04 +0000   Mon, 20 May 2024 12:26:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.250.168
	  Hostname:    multinode-093300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f1736c8bff04fb69e3451244d381888
	  System UUID:                8c66bb4f-dce2-f44a-be67-ef9ccca5596c
	  Boot ID:                    aa950763-894a-47de-9417-30ddee9d31ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ncmp8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-cjqrv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m17s
	  kube-system                 kube-proxy-8b6tx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  7m17s (x2 over 7m18s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s (x2 over 7m18s)  kubelet          Node multinode-093300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s (x2 over 7m18s)  kubelet          Node multinode-093300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m15s                  node-controller  Node multinode-093300-m03 event: Registered Node multinode-093300-m03 in Controller
	  Normal  NodeReady                6m54s                  kubelet          Node multinode-093300-m03 status is now: NodeReady
	  Normal  NodeNotReady             3m19s                  node-controller  Node multinode-093300-m03 status is now: NodeNotReady
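The NotReady transition for multinode-093300-m03 is consistent with the node controller's grace period (40s by default in recent Kubernetes, as I understand it): the kubelet's last lease renewal was 12:25:47 and all four conditions flipped to Unknown at 12:26:31, about 44 seconds later. Computing the gap from the timestamps above:

```python
from datetime import datetime

FMT = "%a, %d %b %Y %H:%M:%S %z"
renew = datetime.strptime("Mon, 20 May 2024 12:25:47 +0000", FMT)       # Lease RenewTime
transition = datetime.strptime("Mon, 20 May 2024 12:26:31 +0000", FMT)  # LastTransitionTime
gap = (transition - renew).total_seconds()
print(gap)  # 44.0
```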
	
	
	==> dmesg <==
	[  +6.902487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 12:00] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.180947] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[May20 12:01] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.113371] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.561398] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.235465] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.240502] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.829574] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.206964] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.208901] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.307979] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +16.934990] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.105845] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.503141] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +7.453347] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.106064] kauditd_printk_skb: 73 callbacks suppressed
	[May20 12:02] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +0.130829] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.863575] systemd-fstab-generator[2332]: Ignoring "noauto" option for root device
	[  +0.174937] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.274833] kauditd_printk_skb: 51 callbacks suppressed
	[May20 12:06] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b9140502b527] <==
	{"level":"info","ts":"2024-05-20T12:21:57.924994Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1145,"took":"7.832034ms","hash":2574517761,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-20T12:21:57.925085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2574517761,"revision":1145,"compact-revision":904}
	{"level":"info","ts":"2024-05-20T12:22:25.736809Z","caller":"traceutil/trace.go:171","msg":"trace[430372741] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"224.491074ms","start":"2024-05-20T12:22:25.512281Z","end":"2024-05-20T12:22:25.736772Z","steps":["trace[430372741] 'process raft request'  (duration: 224.253073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:25.974125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.558296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:25.974225Z","caller":"traceutil/trace.go:171","msg":"trace[1439624153] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"206.703098ms","start":"2024-05-20T12:22:25.767508Z","end":"2024-05-20T12:22:25.974212Z","steps":["trace[1439624153] 'range keys from in-memory index tree'  (duration: 206.506896ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:26.864539Z","caller":"traceutil/trace.go:171","msg":"trace[1459107816] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"150.383153ms","start":"2024-05-20T12:22:26.714135Z","end":"2024-05-20T12:22:26.864518Z","steps":["trace[1459107816] 'process raft request'  (duration: 150.225653ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:22:43.639207Z","caller":"traceutil/trace.go:171","msg":"trace[1481916495] transaction","detail":"{read_only:false; response_revision:1461; number_of_response:1; }","duration":"159.576496ms","start":"2024-05-20T12:22:43.479611Z","end":"2024-05-20T12:22:43.639188Z","steps":["trace[1481916495] 'process raft request'  (duration: 159.463096ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.777887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.881564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:22:44.778337Z","caller":"traceutil/trace.go:171","msg":"trace[1542137351] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1462; }","duration":"427.186365ms","start":"2024-05-20T12:22:44.350923Z","end":"2024-05-20T12:22:44.778109Z","steps":["trace[1542137351] 'range keys from in-memory index tree'  (duration: 426.694864ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.394969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-093300-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-05-20T12:22:44.778786Z","caller":"traceutil/trace.go:171","msg":"trace[755691261] range","detail":"{range_begin:/registry/minions/multinode-093300-m03; range_end:; response_count:1; response_revision:1462; }","duration":"336.839571ms","start":"2024-05-20T12:22:44.441934Z","end":"2024-05-20T12:22:44.778774Z","steps":["trace[755691261] 'range keys from in-memory index tree'  (duration: 336.219968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:22:44.778829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.441838Z","time spent":"336.975772ms","remote":"127.0.0.1:55370","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3171,"request content":"key:\"/registry/minions/multinode-093300-m03\" "}
	{"level":"warn","ts":"2024-05-20T12:22:44.778433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:22:44.35091Z","time spent":"427.511667ms","remote":"127.0.0.1:55230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-20T12:26:05.359811Z","caller":"traceutil/trace.go:171","msg":"trace[671699277] transaction","detail":"{read_only:false; response_revision:1666; number_of_response:1; }","duration":"210.939268ms","start":"2024-05-20T12:26:05.148857Z","end":"2024-05-20T12:26:05.359796Z","steps":["trace[671699277] 'process raft request'  (duration: 210.462066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:26:05.36072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.492967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:26:05.360999Z","caller":"traceutil/trace.go:171","msg":"trace[2035467320] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:1666; }","duration":"101.814068ms","start":"2024-05-20T12:26:05.259175Z","end":"2024-05-20T12:26:05.360989Z","steps":["trace[2035467320] 'agreement among raft nodes before linearized reading'  (duration: 101.445266ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:26:05.359487Z","caller":"traceutil/trace.go:171","msg":"trace[1720821549] linearizableReadLoop","detail":"{readStateIndex:1976; appliedIndex:1975; }","duration":"100.19136ms","start":"2024-05-20T12:26:05.259278Z","end":"2024-05-20T12:26:05.359469Z","steps":["trace[1720821549] 'read index received'  (duration: 99.991659ms)","trace[1720821549] 'applied index is now lower than readState.Index'  (duration: 199.101µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:26:07.16433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.913412ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5005345909760475429 > lease_revoke:<id:45768f95e13a40e5>","response":"size:27"}
	{"level":"info","ts":"2024-05-20T12:26:10.873662Z","caller":"traceutil/trace.go:171","msg":"trace[962951199] transaction","detail":"{read_only:false; response_revision:1669; number_of_response:1; }","duration":"194.958196ms","start":"2024-05-20T12:26:10.678684Z","end":"2024-05-20T12:26:10.873642Z","steps":["trace[962951199] 'process raft request'  (duration: 194.799695ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:26:10.875059Z","caller":"traceutil/trace.go:171","msg":"trace[118346474] linearizableReadLoop","detail":"{readStateIndex:1980; appliedIndex:1980; }","duration":"117.296039ms","start":"2024-05-20T12:26:10.75769Z","end":"2024-05-20T12:26:10.874986Z","steps":["trace[118346474] 'read index received'  (duration: 117.291539ms)","trace[118346474] 'applied index is now lower than readState.Index'  (duration: 3.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:26:10.87526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.58774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:26:10.877017Z","caller":"traceutil/trace.go:171","msg":"trace[447856641] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1669; }","duration":"119.378848ms","start":"2024-05-20T12:26:10.757626Z","end":"2024-05-20T12:26:10.877005Z","steps":["trace[447856641] 'agreement among raft nodes before linearized reading'  (duration: 117.51034ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:26:57.941625Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1386}
	{"level":"info","ts":"2024-05-20T12:26:57.950449Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1386,"took":"7.872136ms","hash":2122880162,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1708032,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-05-20T12:26:57.950557Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2122880162,"revision":1386,"compact-revision":1145}
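The etcd "apply request took too long" warnings above are structured JSON, so the slow-request durations can be pulled out mechanically for triage. A sketch over one of the lines shown (assumes, as in these entries, that "took" is always in milliseconds):

```python
import json

line = ('{"level":"warn","ts":"2024-05-20T12:22:44.777887Z",'
        '"caller":"etcdserver/util.go:170",'
        '"msg":"apply request took too long","took":"426.881564ms",'
        '"expected-duration":"100ms","prefix":"read-only range ",'
        '"request":"key:\\"/registry/health\\" ",'
        '"response":"range_response_count:0 size:5"}')

entry = json.loads(line)
took_ms = None
if entry["msg"] == "apply request took too long":
    # "426.881564ms" -> 426.881564; these entries all report ms.
    took_ms = float(entry["took"].rstrip("ms"))
print(took_ms)  # 426.881564
```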
	
	
	==> kernel <==
	 12:29:50 up 30 min,  0 users,  load average: 0.20, 0.29, 0.24
	Linux multinode-093300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14783dea1240] <==
	I0520 12:28:47.339705       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:28:57.347014       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:28:57.347057       1 main.go:227] handling current node
	I0520 12:28:57.347068       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:28:57.347094       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:29:07.361759       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:29:07.361832       1 main.go:227] handling current node
	I0520 12:29:07.361844       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:29:07.361851       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:29:17.377308       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:29:17.377351       1 main.go:227] handling current node
	I0520 12:29:17.377380       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:29:17.377387       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:29:27.388141       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:29:27.388259       1 main.go:227] handling current node
	I0520 12:29:27.388291       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:29:27.388299       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:29:37.394049       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:29:37.394167       1 main.go:227] handling current node
	I0520 12:29:37.394181       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:29:37.394196       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	I0520 12:29:47.400453       1 main.go:223] Handling node with IPs: map[172.25.248.197:{}]
	I0520 12:29:47.400638       1 main.go:227] handling current node
	I0520 12:29:47.400715       1 main.go:223] Handling node with IPs: map[172.25.250.168:{}]
	I0520 12:29:47.400762       1 main.go:250] Node multinode-093300-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [477e3df15a9c] <==
	I0520 12:02:00.429374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 12:02:00.438155       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 12:02:00.438321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 12:02:01.614673       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 12:02:01.704090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 12:02:01.813012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 12:02:01.825606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.248.197]
	I0520 12:02:01.827042       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 12:02:01.844034       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 12:02:02.479990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0520 12:02:02.502011       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0520 12:02:02.502042       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0520 12:02:02.502238       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 178.997µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0520 12:02:02.503185       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0520 12:02:02.504244       1 timeout.go:142] post-timeout activity - time-elapsed: 2.303061ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0520 12:02:02.703182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 12:02:02.759048       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 12:02:02.829043       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 12:02:16.484547       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 12:02:16.557021       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0520 12:18:09.877717       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62575: use of closed network connection
	E0520 12:18:10.700260       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62580: use of closed network connection
	E0520 12:18:11.474273       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62585: use of closed network connection
	E0520 12:18:48.326152       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62602: use of closed network connection
	E0520 12:18:58.782603       1 conn.go:339] Error on socket receive: read tcp 172.25.248.197:8443->172.25.240.1:62604: use of closed network connection
	
	
	==> kube-controller-manager [b87bdfdab24d] <==
	I0520 12:02:16.953208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.964907ms"
	I0520 12:02:16.953455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.9µs"
	I0520 12:02:18.244134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.795932ms"
	I0520 12:02:18.288228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.02796ms"
	I0520 12:02:18.289203       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="586.098µs"
	I0520 12:02:26.523254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.1µs"
	I0520 12:02:26.549649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.5µs"
	I0520 12:02:29.143189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.788415ms"
	I0520 12:02:29.144170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.8µs"
	I0520 12:02:30.733989       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0520 12:06:44.544627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.513035ms"
	I0520 12:06:44.556530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.014067ms"
	I0520 12:06:44.557710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.9µs"
	I0520 12:06:47.616256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.299406ms"
	I0520 12:06:47.616355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.5µs"
	I0520 12:22:33.084385       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-093300-m03\" does not exist"
	I0520 12:22:33.104885       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-093300-m03" podCIDRs=["10.244.1.0/24"]
	I0520 12:22:35.968109       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-093300-m03"
	I0520 12:22:56.341095       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-093300-m03"
	I0520 12:22:56.368042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.3µs"
	I0520 12:22:56.389258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.3µs"
	I0520 12:22:59.571331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.348641ms"
	I0520 12:22:59.572056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.6µs"
	I0520 12:26:31.159518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.342304ms"
	I0520 12:26:31.162980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.801µs"
	
	
	==> kube-proxy [ab52c7f8615e] <==
	I0520 12:02:18.607841       1 server_linux.go:69] "Using iptables proxy"
	I0520 12:02:18.631094       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.25.248.197"]
	I0520 12:02:18.691457       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 12:02:18.691559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 12:02:18.691600       1 server_linux.go:165] "Using iptables Proxier"
	I0520 12:02:18.697156       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 12:02:18.697595       1 server.go:872] "Version info" version="v1.30.1"
	I0520 12:02:18.697684       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 12:02:18.699853       1 config.go:192] "Starting service config controller"
	I0520 12:02:18.700176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 12:02:18.700549       1 config.go:101] "Starting endpoint slice config controller"
	I0520 12:02:18.700785       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 12:02:18.701388       1 config.go:319] "Starting node config controller"
	I0520 12:02:18.701604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 12:02:18.800714       1 shared_informer.go:320] Caches are synced for service config
	I0520 12:02:18.801393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 12:02:18.802080       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8ec8f8bdd454] <==
	W0520 12:02:00.507060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 12:02:00.507354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 12:02:00.526890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.527118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.589698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 12:02:00.591554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 12:02:00.614454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.615286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.650032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 12:02:00.650308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 12:02:00.710782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.711313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.714192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 12:02:00.714596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 12:02:00.754594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 12:02:00.754629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 12:02:00.843231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 12:02:00.843674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 12:02:00.928690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 12:02:00.929186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 12:02:00.973494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 12:02:00.973906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 12:02:01.111995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 12:02:01.112049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 12:02:02.288801       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:25:02 multinode-093300 kubelet[2141]: E0520 12:25:02.778935    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:25:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:25:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:25:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:25:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:26:02 multinode-093300 kubelet[2141]: E0520 12:26:02.779246    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:26:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:26:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:26:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:26:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:27:02 multinode-093300 kubelet[2141]: E0520 12:27:02.791532    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:27:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:27:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:27:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:27:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:28:02 multinode-093300 kubelet[2141]: E0520 12:28:02.778808    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:28:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:28:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:28:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:28:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:29:02 multinode-093300 kubelet[2141]: E0520 12:29:02.780184    2141 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:29:02 multinode-093300 kubelet[2141]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:29:02 multinode-093300 kubelet[2141]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:29:02 multinode-093300 kubelet[2141]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:29:02 multinode-093300 kubelet[2141]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:29:42.153550   15344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-093300 -n multinode-093300: (12.9088287s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-093300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (145.40s)

                                                
                                    
x
+
TestPreload (585.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-646700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0520 05:33:04.564544    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:35:25.067567    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-646700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m32.2666998s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-646700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-646700 image pull gcr.io/k8s-minikube/busybox: (8.6329701s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-646700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-646700: (40.892445s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-646700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0520 05:38:04.569241    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:40:08.303423    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 05:40:25.065457    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-646700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: exit status 90 (3m9.9107178s)

                                                
                                                
-- stdout --
	* [test-preload-646700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the hyperv driver based on existing profile
	* Starting "test-preload-646700" primary control-plane node in "test-preload-646700" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperv VM for "test-preload-646700" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:38:02.399037    2616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 05:38:02.401029    2616 out.go:291] Setting OutFile to fd 1772 ...
	I0520 05:38:02.402099    2616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:38:02.402099    2616 out.go:304] Setting ErrFile to fd 1888...
	I0520 05:38:02.402099    2616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:38:02.429720    2616 out.go:298] Setting JSON to false
	I0520 05:38:02.434140    2616 start.go:129] hostinfo: {"hostname":"minikube1","uptime":8678,"bootTime":1716200003,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 05:38:02.434140    2616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:38:02.615532    2616 out.go:177] * [test-preload-646700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 05:38:02.651851    2616 notify.go:220] Checking for updates...
	I0520 05:38:02.724154    2616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 05:38:02.859947    2616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:38:03.060163    2616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 05:38:03.364087    2616 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 05:38:03.579483    2616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:38:03.712322    2616 config.go:182] Loaded profile config "test-preload-646700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0520 05:38:03.852695    2616 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 05:38:03.930259    2616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:38:09.613094    2616 out.go:177] * Using the hyperv driver based on existing profile
	I0520 05:38:09.659095    2616 start.go:297] selected driver: hyperv
	I0520 05:38:09.659095    2616 start.go:901] validating driver "hyperv" against &{Name:test-preload-646700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-646700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.254.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:38:09.659095    2616 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:38:09.710862    2616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:38:09.710862    2616 cni.go:84] Creating CNI manager for ""
	I0520 05:38:09.710862    2616 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:38:09.710862    2616 start.go:340] cluster config:
{Name:test-preload-646700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-646700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.254.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:38:09.711571    2616 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:38:09.812337    2616 out.go:177] * Starting "test-preload-646700" primary control-plane node in "test-preload-646700" cluster
	I0520 05:38:09.862351    2616 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0520 05:38:09.901612    2616 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0520 05:38:09.902554    2616 cache.go:56] Caching tarball of preloaded images
	I0520 05:38:09.902819    2616 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0520 05:38:10.052262    2616 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0520 05:38:10.165711    2616 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0520 05:38:10.237463    2616 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0520 05:38:14.957557    2616 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0520 05:38:14.958444    2616 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0520 05:38:16.093799    2616 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
	I0520 05:38:16.094931    2616 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\test-preload-646700\config.json ...
	I0520 05:38:16.096736    2616 start.go:360] acquireMachinesLock for test-preload-646700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:38:16.097677    2616 start.go:364] duration metric: took 941.4µs to acquireMachinesLock for "test-preload-646700"
	I0520 05:38:16.097809    2616 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:38:16.097809    2616 fix.go:54] fixHost starting: 
	I0520 05:38:16.098664    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:18.926623    2616 main.go:141] libmachine: [stdout =====>] : Off
	
	I0520 05:38:18.927375    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:18.927375    2616 fix.go:112] recreateIfNeeded on test-preload-646700: state=Stopped err=<nil>
	W0520 05:38:18.927375    2616 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:38:18.960106    2616 out.go:177] * Restarting existing hyperv VM for "test-preload-646700" ...
	I0520 05:38:18.964861    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM test-preload-646700
	I0520 05:38:22.241303    2616 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:38:22.242063    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:22.242063    2616 main.go:141] libmachine: Waiting for host to start...
	I0520 05:38:22.242063    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:24.653574    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:24.653731    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:24.653924    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:38:27.222247    2616 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:38:27.222247    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:28.230741    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:30.557592    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:30.558343    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:30.558410    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:38:33.166645    2616 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:38:33.167677    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:34.180821    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:36.467994    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:36.468725    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:36.468725    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:38:39.136305    2616 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:38:39.136939    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:40.151128    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:42.463056    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:42.463830    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:42.463925    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:38:45.147692    2616 main.go:141] libmachine: [stdout =====>] : 
	I0520 05:38:45.147692    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:46.159919    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:48.462983    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:48.462983    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:48.462983    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:38:51.157496    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:38:51.157496    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:51.161142    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:53.411790    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:53.411790    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:53.411902    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:38:56.121312    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:38:56.121312    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:56.121866    2616 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\test-preload-646700\config.json ...
	I0520 05:38:56.124523    2616 machine.go:94] provisionDockerMachine start ...
	I0520 05:38:56.124523    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:38:58.377868    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:38:58.378079    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:38:58.378176    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:01.079303    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:01.079303    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:01.085702    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:01.086058    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:01.086058    2616 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 05:39:01.227282    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 05:39:01.227282    2616 buildroot.go:166] provisioning hostname "test-preload-646700"
	I0520 05:39:01.227282    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:03.491334    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:03.491334    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:03.492394    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:06.129386    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:06.129386    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:06.136782    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:06.137724    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:06.137724    2616 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-646700 && echo "test-preload-646700" | sudo tee /etc/hostname
	I0520 05:39:06.304738    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-646700
	
	I0520 05:39:06.304893    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:08.584467    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:08.585465    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:08.585511    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:11.230620    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:11.230871    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:11.237946    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:11.238654    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:11.238654    2616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-646700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-646700/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-646700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 05:39:11.395598    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 05:39:11.395692    2616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 05:39:11.395802    2616 buildroot.go:174] setting up certificates
	I0520 05:39:11.395802    2616 provision.go:84] configureAuth start
	I0520 05:39:11.395886    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:13.623419    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:13.623419    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:13.623419    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:16.292889    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:16.292960    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:16.292960    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:18.533634    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:18.534517    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:18.534726    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:21.172689    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:21.172689    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:21.172689    2616 provision.go:143] copyHostCerts
	I0520 05:39:21.173380    2616 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 05:39:21.173380    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 05:39:21.173380    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 05:39:21.175160    2616 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 05:39:21.175222    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 05:39:21.175222    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 05:39:21.176703    2616 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 05:39:21.176703    2616 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 05:39:21.177283    2616 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 05:39:21.178517    2616 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.test-preload-646700 san=[127.0.0.1 172.25.251.124 localhost minikube test-preload-646700]
	I0520 05:39:21.508078    2616 provision.go:177] copyRemoteCerts
	I0520 05:39:21.523087    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 05:39:21.523164    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:23.759272    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:23.759272    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:23.759385    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:26.437486    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:26.437792    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:26.438065    2616 sshutil.go:53] new ssh client: &{IP:172.25.251.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\test-preload-646700\id_rsa Username:docker}
	I0520 05:39:26.542216    2616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0189115s)
	I0520 05:39:26.542818    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 05:39:26.590302    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 05:39:26.639645    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 05:39:26.686256    2616 provision.go:87] duration metric: took 15.2903552s to configureAuth
	I0520 05:39:26.686311    2616 buildroot.go:189] setting minikube options for container-runtime
	I0520 05:39:26.687066    2616 config.go:182] Loaded profile config "test-preload-646700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0520 05:39:26.687066    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:28.875463    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:28.876538    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:28.876538    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:31.527298    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:31.527298    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:31.534644    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:31.534806    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:31.534806    2616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 05:39:31.675781    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 05:39:31.675888    2616 buildroot.go:70] root file system type: tmpfs
	I0520 05:39:31.676052    2616 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 05:39:31.676171    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:33.847999    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:33.847999    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:33.849098    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:36.487078    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:36.488034    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:36.494285    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:36.494285    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:36.494834    2616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 05:39:36.663009    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 05:39:36.663098    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:38.832841    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:38.832841    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:38.832933    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:41.477134    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:41.477134    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:41.483856    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:41.483856    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:41.484371    2616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 05:39:43.889378    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 05:39:43.889378    2616 machine.go:97] duration metric: took 47.7647403s to provisionDockerMachine
	I0520 05:39:43.889917    2616 start.go:293] postStartSetup for "test-preload-646700" (driver="hyperv")
	I0520 05:39:43.889917    2616 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 05:39:43.903775    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 05:39:43.903775    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:46.088999    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:46.088999    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:46.088999    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:48.686022    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:48.686022    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:48.687314    2616 sshutil.go:53] new ssh client: &{IP:172.25.251.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\test-preload-646700\id_rsa Username:docker}
	I0520 05:39:48.796125    2616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8923382s)
	I0520 05:39:48.810797    2616 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 05:39:48.817207    2616 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 05:39:48.817207    2616 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 05:39:48.817740    2616 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 05:39:48.819066    2616 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 05:39:48.832021    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 05:39:48.850334    2616 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 05:39:48.894521    2616 start.go:296] duration metric: took 5.0045923s for postStartSetup
	I0520 05:39:48.894707    2616 fix.go:56] duration metric: took 1m32.7966735s for fixHost
	I0520 05:39:48.894792    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:51.066778    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:51.067518    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:51.067518    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:53.718789    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:53.718789    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:53.726127    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:53.726364    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:53.726364    2616 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 05:39:53.862454    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716208793.859091821
	
	I0520 05:39:53.862510    2616 fix.go:216] guest clock: 1716208793.859091821
	I0520 05:39:53.862510    2616 fix.go:229] Guest: 2024-05-20 05:39:53.859091821 -0700 PDT Remote: 2024-05-20 05:39:48.8947076 -0700 PDT m=+106.580007301 (delta=4.964384221s)
	I0520 05:39:53.862679    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:39:56.058957    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:39:56.058957    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:56.058957    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:39:58.752013    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:39:58.752710    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:39:58.759494    2616 main.go:141] libmachine: Using SSH client type: native
	I0520 05:39:58.760336    2616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.251.124 22 <nil> <nil>}
	I0520 05:39:58.760336    2616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716208793
	I0520 05:39:58.904342    2616 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 12:39:53 UTC 2024
	
	I0520 05:39:58.905340    2616 fix.go:236] clock set: Mon May 20 12:39:53 UTC 2024
	 (err=<nil>)
	I0520 05:39:58.905340    2616 start.go:83] releasing machines lock for "test-preload-646700", held for 1m42.8073541s
	I0520 05:39:58.905615    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:40:01.157790    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:40:01.157876    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:40:01.158080    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:40:03.805020    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:40:03.805020    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:40:03.810197    2616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 05:40:03.810326    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:40:03.822173    2616 ssh_runner.go:195] Run: cat /version.json
	I0520 05:40:03.822173    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-646700 ).state
	I0520 05:40:06.134734    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:40:06.134734    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:40:06.134734    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:40:06.136330    2616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 05:40:06.136383    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:40:06.136383    2616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-646700 ).networkadapters[0]).ipaddresses[0]
	I0520 05:40:08.902924    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:40:08.903399    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:40:08.903399    2616 sshutil.go:53] new ssh client: &{IP:172.25.251.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\test-preload-646700\id_rsa Username:docker}
	I0520 05:40:08.930739    2616 main.go:141] libmachine: [stdout =====>] : 172.25.251.124
	
	I0520 05:40:08.930809    2616 main.go:141] libmachine: [stderr =====>] : 
	I0520 05:40:08.930861    2616 sshutil.go:53] new ssh client: &{IP:172.25.251.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\test-preload-646700\id_rsa Username:docker}
	I0520 05:40:09.007921    2616 ssh_runner.go:235] Completed: cat /version.json: (5.1857363s)
	W0520 05:40:09.007921    2616 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 05:40:09.024773    2616 ssh_runner.go:195] Run: systemctl --version
	I0520 05:40:09.085827    2616 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2756173s)
	I0520 05:40:09.099377    2616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 05:40:09.107688    2616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 05:40:09.120900    2616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 05:40:09.148343    2616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 05:40:09.148491    2616 start.go:494] detecting cgroup driver to use...
	I0520 05:40:09.148873    2616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:40:09.198600    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 05:40:09.232400    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 05:40:09.253650    2616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 05:40:09.268381    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 05:40:09.301737    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:40:09.336634    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 05:40:09.370190    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 05:40:09.402816    2616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 05:40:09.434773    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 05:40:09.468904    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 05:40:09.509156    2616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 05:40:09.544748    2616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 05:40:09.578830    2616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 05:40:09.611865    2616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:40:09.817472    2616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 05:40:09.849833    2616 start.go:494] detecting cgroup driver to use...
	I0520 05:40:09.864408    2616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 05:40:09.901256    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:40:09.941842    2616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 05:40:09.986728    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 05:40:10.023913    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:40:10.061281    2616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 05:40:10.130916    2616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 05:40:10.157371    2616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 05:40:10.205397    2616 ssh_runner.go:195] Run: which cri-dockerd
	I0520 05:40:10.225067    2616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 05:40:10.243605    2616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 05:40:10.290193    2616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 05:40:10.491200    2616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 05:40:10.697495    2616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 05:40:10.697781    2616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 05:40:10.743872    2616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:40:10.946720    2616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 05:41:12.098732    2616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1517746s)
	I0520 05:41:12.111562    2616 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 05:41:12.145869    2616 out.go:177] 
	W0520 05:41:12.149495    2616 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:39:42 test-preload-646700 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:39:42 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:42.091328384Z" level=info msg="Starting up"
	May 20 12:39:42 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:42.093147576Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:39:42 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:42.096541062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.135455803Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.161708696Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.161766196Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.161874796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.161897095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.162509593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.162597093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.162985791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.163081691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.163102491Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.163113390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.163593089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.164258686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170152662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170182462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170295361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170405261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.171144458Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.171332757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.171467256Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.188972385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189039585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189061785Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189077684Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189105484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189196084Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189448583Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189532183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189645082Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189680482Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189696082Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189711082Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189770382Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189838581Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189860681Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190123880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190172480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190197180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190224880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190245780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190568078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190673578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190854477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190966077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191087476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191157276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191220876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191285475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191345975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191419275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191506675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191597174Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191897373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.192006873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.192073072Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.192186672Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193529466Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193569366Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193585766Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193687366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193813865Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193832865Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194126364Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194325263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194377063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194418563Z" level=info msg="containerd successfully booted in 0.062289s"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.156611656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.288583051Z" level=info msg="Loading containers: start."
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.718263173Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.801711475Z" level=info msg="Loading containers: done."
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.827044951Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.827971335Z" level=info msg="Daemon has completed initialization"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.884101095Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.884343391Z" level=info msg="API listen on [::]:2376"
	May 20 12:39:43 test-preload-646700 systemd[1]: Started Docker Application Container Engine.
	May 20 12:40:10 test-preload-646700 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.968363933Z" level=info msg="Processing signal 'terminated'"
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.971273496Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.972133085Z" level=info msg="Daemon shutdown complete"
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.972238883Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.972252483Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:40:11 test-preload-646700 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:40:11 test-preload-646700 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:40:11 test-preload-646700 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:40:12 test-preload-646700 dockerd[1047]: time="2024-05-20T12:40:12.053268515Z" level=info msg="Starting up"
	May 20 12:41:12 test-preload-646700 dockerd[1047]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:41:12 test-preload-646700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:41:12 test-preload-646700 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:41:12 test-preload-646700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170152662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170182462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170295361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.170405261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.171144458Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.171332757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.171467256Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.188972385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189039585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189061785Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189077684Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189105484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189196084Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189448583Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189532183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189645082Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189680482Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189696082Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189711082Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189770382Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189838581Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.189860681Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190123880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190172480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190197180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190224880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190245780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190568078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190673578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190854477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.190966077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191087476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191157276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191220876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191285475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191345975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191419275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191506675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191597174Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.191897373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.192006873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.192073072Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.192186672Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193529466Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193569366Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193585766Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193687366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193813865Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.193832865Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194126364Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194325263Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194377063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:39:42 test-preload-646700 dockerd[660]: time="2024-05-20T12:39:42.194418563Z" level=info msg="containerd successfully booted in 0.062289s"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.156611656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.288583051Z" level=info msg="Loading containers: start."
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.718263173Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.801711475Z" level=info msg="Loading containers: done."
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.827044951Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.827971335Z" level=info msg="Daemon has completed initialization"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.884101095Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:39:43 test-preload-646700 dockerd[654]: time="2024-05-20T12:39:43.884343391Z" level=info msg="API listen on [::]:2376"
	May 20 12:39:43 test-preload-646700 systemd[1]: Started Docker Application Container Engine.
	May 20 12:40:10 test-preload-646700 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.968363933Z" level=info msg="Processing signal 'terminated'"
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.971273496Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.972133085Z" level=info msg="Daemon shutdown complete"
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.972238883Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:40:10 test-preload-646700 dockerd[654]: time="2024-05-20T12:40:10.972252483Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:40:11 test-preload-646700 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:40:11 test-preload-646700 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:40:11 test-preload-646700 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:40:12 test-preload-646700 dockerd[1047]: time="2024-05-20T12:40:12.053268515Z" level=info msg="Starting up"
	May 20 12:41:12 test-preload-646700 dockerd[1047]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:41:12 test-preload-646700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:41:12 test-preload-646700 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:41:12 test-preload-646700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 05:41:12.150185    2616 out.go:239] * 
	* 
	W0520 05:41:12.151655    2616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:41:12.154683    2616 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:68: out/minikube-windows-amd64.exe start -p test-preload-646700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-05-20 05:41:12.3658842 -0700 PDT m=+8428.038999601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-646700 -n test-preload-646700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-646700 -n test-preload-646700: exit status 6 (12.3770118s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:41:12.504123    1728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0520 05:41:24.678807    1728 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-646700" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-646700" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-646700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-646700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-646700: (1m1.7000352s)
--- FAIL: TestPreload (585.94s)

                                                
                                    
TestScheduledStopWindows (295.28s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-096800 --memory=2048 --driver=hyperv
E0520 05:43:04.577006    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:45:25.060753    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-096800 --memory=2048 --driver=hyperv: exit status 90 (3m40.5397756s)

                                                
                                                
-- stdout --
	* [scheduled-stop-096800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "scheduled-stop-096800" primary control-plane node in "scheduled-stop-096800" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:42:26.544536   10812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:44:31 scheduled-stop-096800 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:44:31 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:31.888357025Z" level=info msg="Starting up"
	May 20 12:44:31 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:31.889275907Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:44:31 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:31.890779477Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.937286567Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.965315019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966111703Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966452596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966555294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966701791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966745691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967071284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967171682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967197682Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967210281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967316379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967683072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971235503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971348201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971517997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971635195Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971830891Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.972027187Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.972121585Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997379091Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997620786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997765283Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997878581Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997960480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998086777Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998423971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998631667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998788163Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998818163Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998835663Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998852462Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998867462Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998978460Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999005359Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999022659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999037859Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999052758Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999079758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999097557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999112657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999128757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999142557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999158056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999172156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999188256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999203855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999222855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999238655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999272554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999302153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999328053Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999358552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999399552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999417151Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999474050Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999495150Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999507349Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999519849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999618447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999709045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999736745Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000041239Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000194836Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000271234Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000293334Z" level=info msg="containerd successfully booted in 0.066294s"
	May 20 12:44:32 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:32.962793273Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:44:32 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:32.992494928Z" level=info msg="Loading containers: start."
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.265553924Z" level=info msg="Loading containers: done."
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.291733256Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.291874258Z" level=info msg="Daemon has completed initialization"
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.404025081Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:44:33 scheduled-stop-096800 systemd[1]: Started Docker Application Container Engine.
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.404151083Z" level=info msg="API listen on [::]:2376"
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.787853094Z" level=info msg="Processing signal 'terminated'"
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.789883655Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.790437672Z" level=info msg="Daemon shutdown complete"
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.790489273Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.790491173Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:45:05 scheduled-stop-096800 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:45:06 scheduled-stop-096800 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:45:06 scheduled-stop-096800 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:45:06 scheduled-stop-096800 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:45:06 scheduled-stop-096800 dockerd[1025]: time="2024-05-20T12:45:06.865560206Z" level=info msg="Starting up"
	May 20 12:46:06 scheduled-stop-096800 dockerd[1025]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:46:06 scheduled-stop-096800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:46:06 scheduled-stop-096800 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:46:06 scheduled-stop-096800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 90

                                                
                                                
-- stdout --
	* [scheduled-stop-096800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "scheduled-stop-096800" primary control-plane node in "scheduled-stop-096800" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:42:26.544536   10812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 12:44:31 scheduled-stop-096800 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:44:31 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:31.888357025Z" level=info msg="Starting up"
	May 20 12:44:31 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:31.889275907Z" level=info msg="containerd not running, starting managed containerd"
	May 20 12:44:31 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:31.890779477Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.937286567Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.965315019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966111703Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966452596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966555294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966701791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.966745691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967071284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967171682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967197682Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967210281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967316379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.967683072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971235503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971348201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971517997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971635195Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.971830891Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.972027187Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.972121585Z" level=info msg="metadata content store policy set" policy=shared
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997379091Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997620786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997765283Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997878581Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.997960480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998086777Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998423971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998631667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998788163Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998818163Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998835663Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998852462Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998867462Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.998978460Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999005359Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999022659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999037859Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999052758Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999079758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999097557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999112657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999128757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999142557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999158056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999172156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999188256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999203855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999222855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999238655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999272554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999302153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999328053Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999358552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999399552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999417151Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999474050Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999495150Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999507349Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999519849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999618447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999709045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 12:44:31 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:31.999736745Z" level=info msg="NRI interface is disabled by configuration."
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000041239Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000194836Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000271234Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 12:44:32 scheduled-stop-096800 dockerd[670]: time="2024-05-20T12:44:32.000293334Z" level=info msg="containerd successfully booted in 0.066294s"
	May 20 12:44:32 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:32.962793273Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 12:44:32 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:32.992494928Z" level=info msg="Loading containers: start."
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.265553924Z" level=info msg="Loading containers: done."
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.291733256Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.291874258Z" level=info msg="Daemon has completed initialization"
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.404025081Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 12:44:33 scheduled-stop-096800 systemd[1]: Started Docker Application Container Engine.
	May 20 12:44:33 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:44:33.404151083Z" level=info msg="API listen on [::]:2376"
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.787853094Z" level=info msg="Processing signal 'terminated'"
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.789883655Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.790437672Z" level=info msg="Daemon shutdown complete"
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.790489273Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 12:45:05 scheduled-stop-096800 dockerd[664]: time="2024-05-20T12:45:05.790491173Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 12:45:05 scheduled-stop-096800 systemd[1]: Stopping Docker Application Container Engine...
	May 20 12:45:06 scheduled-stop-096800 systemd[1]: docker.service: Deactivated successfully.
	May 20 12:45:06 scheduled-stop-096800 systemd[1]: Stopped Docker Application Container Engine.
	May 20 12:45:06 scheduled-stop-096800 systemd[1]: Starting Docker Application Container Engine...
	May 20 12:45:06 scheduled-stop-096800 dockerd[1025]: time="2024-05-20T12:45:06.865560206Z" level=info msg="Starting up"
	May 20 12:46:06 scheduled-stop-096800 dockerd[1025]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 12:46:06 scheduled-stop-096800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 12:46:06 scheduled-stop-096800 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 12:46:06 scheduled-stop-096800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopWindows FAILED at 2024-05-20 05:46:06.9909563 -0700 PDT m=+8722.663344701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-096800 -n scheduled-stop-096800
E0520 05:46:07.827533    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-096800 -n scheduled-stop-096800: exit status 6 (12.6784505s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 05:46:07.111890    3468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0520 05:46:19.602717    3468 status.go:417] kubeconfig endpoint: get endpoint: "scheduled-stop-096800" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "scheduled-stop-096800" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "scheduled-stop-096800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-096800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-096800: (1m2.0535439s)
--- FAIL: TestScheduledStopWindows (295.28s)

                                                
                                    
TestRunningBinaryUpgrade (1087.1s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3118757125.exe start -p running-upgrade-649400 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3118757125.exe start -p running-upgrade-649400 --memory=2200 --vm-driver=hyperv: (8m14.7832436s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-649400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-649400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 93 (9m46.0293281s)

                                                
                                                
-- stdout --
	* [running-upgrade-649400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the hyperv driver based on existing profile
	* Starting "running-upgrade-649400" primary control-plane node in "running-upgrade-649400" cluster
	* Updating the running hyperv "running-upgrade-649400" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	
	

-- /stdout --
** stderr ** 
	W0520 06:00:38.188340   10256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 06:00:38.190108   10256 out.go:291] Setting OutFile to fd 1344 ...
	I0520 06:00:38.190888   10256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 06:00:38.190888   10256 out.go:304] Setting ErrFile to fd 1476...
	I0520 06:00:38.190888   10256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 06:00:38.220193   10256 out.go:298] Setting JSON to false
	I0520 06:00:38.224194   10256 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10034,"bootTime":1716200003,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 06:00:38.224194   10256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 06:00:38.228157   10256 out.go:177] * [running-upgrade-649400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 06:00:38.231883   10256 notify.go:220] Checking for updates...
	I0520 06:00:38.235214   10256 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 06:00:38.239499   10256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 06:00:38.245629   10256 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 06:00:38.247642   10256 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 06:00:38.250604   10256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 06:00:38.254451   10256 config.go:182] Loaded profile config "running-upgrade-649400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 06:00:38.258505   10256 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 06:00:38.260617   10256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 06:00:44.076502   10256 out.go:177] * Using the hyperv driver based on existing profile
	I0520 06:00:44.079798   10256 start.go:297] selected driver: hyperv
	I0520 06:00:44.079798   10256 start.go:901] validating driver "hyperv" against &{Name:running-upgrade-649400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-649400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.241.47 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 06:00:44.080495   10256 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 06:00:44.134179   10256 cni.go:84] Creating CNI manager for ""
	I0520 06:00:44.134179   10256 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 06:00:44.134179   10256 start.go:340] cluster config:
	{Name:running-upgrade-649400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-649400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.241.47 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 06:00:44.134179   10256 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 06:00:44.140480   10256 out.go:177] * Starting "running-upgrade-649400" primary control-plane node in "running-upgrade-649400" cluster
	I0520 06:00:44.142443   10256 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 06:00:44.143085   10256 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
	I0520 06:00:44.143140   10256 cache.go:56] Caching tarball of preloaded images
	I0520 06:00:44.143140   10256 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 06:00:44.143684   10256 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 06:00:44.143738   10256 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-649400\config.json ...
	I0520 06:00:44.145195   10256 start.go:360] acquireMachinesLock for running-upgrade-649400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 06:05:34.273075   10256 start.go:364] duration metric: took 4m50.1270982s to acquireMachinesLock for "running-upgrade-649400"
	I0520 06:05:34.273075   10256 start.go:96] Skipping create...Using existing machine configuration
	I0520 06:05:34.273075   10256 fix.go:54] fixHost starting: 
	I0520 06:05:34.274079   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:05:36.644893   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:05:36.644939   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:36.644939   10256 fix.go:112] recreateIfNeeded on running-upgrade-649400: state=Running err=<nil>
	W0520 06:05:36.644939   10256 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 06:05:36.648494   10256 out.go:177] * Updating the running hyperv "running-upgrade-649400" VM ...
	I0520 06:05:36.652691   10256 machine.go:94] provisionDockerMachine start ...
	I0520 06:05:36.652691   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:05:39.071307   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:05:39.071386   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:39.071462   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:05:42.021256   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:05:42.022203   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:42.028899   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:05:42.029735   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:05:42.029735   10256 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 06:05:42.208629   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-649400
	
	I0520 06:05:42.208737   10256 buildroot.go:166] provisioning hostname "running-upgrade-649400"
	I0520 06:05:42.208737   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:05:44.726823   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:05:44.726924   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:44.727005   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:05:47.441103   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:05:47.442091   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:47.448763   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:05:47.449508   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:05:47.449577   10256 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-649400 && echo "running-upgrade-649400" | sudo tee /etc/hostname
	I0520 06:05:47.626682   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-649400
	
	I0520 06:05:47.626682   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:05:50.003473   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:05:50.003796   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:50.003968   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:05:52.979514   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:05:52.979798   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:52.986874   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:05:52.987708   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:05:52.987708   10256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-649400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-649400/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-649400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 06:05:53.131737   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 06:05:53.131737   10256 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 06:05:53.131737   10256 buildroot.go:174] setting up certificates
	I0520 06:05:53.131737   10256 provision.go:84] configureAuth start
	I0520 06:05:53.131737   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:05:55.488421   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:05:55.488421   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:55.488421   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:05:58.301062   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:05:58.321101   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:05:58.321163   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:00.663095   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:00.663567   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:00.663567   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:03.514808   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:03.514808   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:03.515718   10256 provision.go:143] copyHostCerts
	I0520 06:06:03.516174   10256 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 06:06:03.516174   10256 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 06:06:03.516765   10256 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 06:06:03.518210   10256 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 06:06:03.518210   10256 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 06:06:03.518210   10256 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 06:06:03.519478   10256 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 06:06:03.520025   10256 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 06:06:03.520449   10256 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 06:06:03.521596   10256 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-649400 san=[127.0.0.1 172.25.241.47 localhost minikube running-upgrade-649400]
	I0520 06:06:03.953591   10256 provision.go:177] copyRemoteCerts
	I0520 06:06:03.966581   10256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 06:06:03.966581   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:06.467586   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:06.467586   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:06.467699   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:09.296815   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:09.296815   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:09.297179   10256 sshutil.go:53] new ssh client: &{IP:172.25.241.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-649400\id_rsa Username:docker}
	I0520 06:06:09.415384   10256 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.447729s)
	I0520 06:06:09.415524   10256 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 06:06:09.474137   10256 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 06:06:09.520915   10256 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 06:06:09.577334   10256 provision.go:87] duration metric: took 16.4455534s to configureAuth
	I0520 06:06:09.577762   10256 buildroot.go:189] setting minikube options for container-runtime
	I0520 06:06:09.578392   10256 config.go:182] Loaded profile config "running-upgrade-649400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 06:06:09.578392   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:11.945765   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:11.945825   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:11.945901   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:14.771942   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:14.772018   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:14.777613   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:14.777690   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:06:14.777690   10256 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 06:06:14.929250   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 06:06:14.929403   10256 buildroot.go:70] root file system type: tmpfs
	I0520 06:06:14.929624   10256 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 06:06:14.929697   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:17.310145   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:17.310726   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:17.310795   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:20.165145   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:20.165368   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:20.172079   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:20.173121   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:06:20.173121   10256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 06:06:20.356505   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 06:06:20.356505   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:22.719687   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:22.719726   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:22.719847   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:25.554832   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:25.554927   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:25.561183   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:25.561792   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:06:25.561871   10256 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 06:06:25.716012   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 06:06:25.716012   10256 machine.go:97] duration metric: took 49.0631897s to provisionDockerMachine
	I0520 06:06:25.716012   10256 start.go:293] postStartSetup for "running-upgrade-649400" (driver="hyperv")
	I0520 06:06:25.716012   10256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 06:06:25.731510   10256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 06:06:25.731510   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:28.087018   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:28.087258   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:28.087415   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:30.891810   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:30.892079   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:30.892079   10256 sshutil.go:53] new ssh client: &{IP:172.25.241.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-649400\id_rsa Username:docker}
	I0520 06:06:31.016241   10256 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2845315s)
	I0520 06:06:31.030179   10256 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 06:06:31.037646   10256 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 06:06:31.037751   10256 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 06:06:31.038289   10256 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 06:06:31.038809   10256 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 06:06:31.054089   10256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 06:06:31.074705   10256 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 06:06:31.129499   10256 start.go:296] duration metric: took 5.4134093s for postStartSetup
	I0520 06:06:31.129573   10256 fix.go:56] duration metric: took 56.8563453s for fixHost
	I0520 06:06:31.129634   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:33.748781   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:33.748781   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:33.748781   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:36.793357   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:36.793918   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:36.801056   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:36.801557   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:06:36.801557   10256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 06:06:36.955556   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210396.960221697
	
	I0520 06:06:36.955636   10256 fix.go:216] guest clock: 1716210396.960221697
	I0520 06:06:36.955636   10256 fix.go:229] Guest: 2024-05-20 06:06:36.960221697 -0700 PDT Remote: 2024-05-20 06:06:31.129573 -0700 PDT m=+353.023572801 (delta=5.830648697s)
	I0520 06:06:36.955793   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:39.543719   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:39.543873   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:39.544042   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:42.419597   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:42.419597   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:42.426207   10256 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:42.426743   10256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.47 22 <nil> <nil>}
	I0520 06:06:42.426743   10256 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716210396
	I0520 06:06:42.594166   10256 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 13:06:36 UTC 2024
	
	I0520 06:06:42.594166   10256 fix.go:236] clock set: Mon May 20 13:06:36 UTC 2024
	 (err=<nil>)
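The clock-fix sequence above reads the guest clock over SSH with `date +%s.%N`, compares it against the host time recorded at `postStartSetup`, and resets the guest with `sudo date -s @<epoch>` when they drift (fix.go:229 reports `delta=5.830648697s`). A minimal sketch of that delta computation, using the exact timestamps from the log (the function name is illustrative, not minikube's):

```python
from datetime import datetime, timezone

def clock_delta(guest_epoch: float, remote: datetime) -> float:
    # Guest-minus-host clock skew in seconds, mirroring the delta
    # reported at fix.go:229. Python rounds to microseconds here,
    # so the result agrees with the log to ~1e-6 s.
    guest = datetime.fromtimestamp(guest_epoch, tz=timezone.utc)
    return (guest - remote).total_seconds()

# Values from the log: guest clock 1716210396.960221697,
# host time 2024-05-20 06:06:31.129573 PDT == 13:06:31.129573 UTC.
remote = datetime(2024, 5, 20, 13, 6, 31, 129573, tzinfo=timezone.utc)
print(clock_delta(1716210396.960221697, remote))
```

The skew exceeds minikube's tolerance, which is why the subsequent `sudo date -s @1716210396` command appears in the log.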
	I0520 06:06:42.594257   10256 start.go:83] releasing machines lock for "running-upgrade-649400", held for 1m8.3209081s
	I0520 06:06:42.594543   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:45.229348   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:45.229458   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:45.229552   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:48.341055   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:48.341197   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:48.347682   10256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 06:06:48.347852   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:48.372501   10256 ssh_runner.go:195] Run: cat /version.json
	I0520 06:06:48.372501   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:06:51.129388   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:51.129483   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:51.129483   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:51.129483   10256 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:51.129483   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:51.130078   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-649400 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:54.386553   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:54.386553   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:54.387552   10256 sshutil.go:53] new ssh client: &{IP:172.25.241.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-649400\id_rsa Username:docker}
	I0520 06:06:54.432624   10256 main.go:141] libmachine: [stdout =====>] : 172.25.241.47
	
	I0520 06:06:54.432624   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:54.432624   10256 sshutil.go:53] new ssh client: &{IP:172.25.241.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-649400\id_rsa Username:docker}
	I0520 06:07:05.246437   10256 ssh_runner.go:235] Completed: cat /version.json: (16.8738206s)
	I0520 06:07:05.246437   10256 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (16.8987098s)
	W0520 06:07:05.246437   10256 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	W0520 06:07:05.246437   10256 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0520 06:07:05.246634   10256 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0520 06:07:05.246634   10256 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0520 06:07:05.263688   10256 ssh_runner.go:195] Run: systemctl --version
	I0520 06:07:05.294583   10256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 06:07:05.306529   10256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 06:07:05.324301   10256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 06:07:05.362296   10256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 06:07:05.400280   10256 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 06:07:05.400403   10256 start.go:494] detecting cgroup driver to use...
	I0520 06:07:05.400853   10256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:07:05.451971   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 06:07:05.488298   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 06:07:05.512377   10256 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 06:07:05.534634   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 06:07:05.573571   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:07:05.605862   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 06:07:05.642991   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:07:05.678361   10256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 06:07:05.711308   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 06:07:05.743124   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 06:07:05.775138   10256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 06:07:05.813995   10256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 06:07:05.856265   10256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 06:07:05.915993   10256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:07:06.232793   10256 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 06:07:06.264113   10256 start.go:494] detecting cgroup driver to use...
	I0520 06:07:06.278593   10256 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 06:07:06.321228   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:07:06.358418   10256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 06:07:06.398419   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:07:06.436043   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 06:07:06.459121   10256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:07:06.509117   10256 ssh_runner.go:195] Run: which cri-dockerd
	I0520 06:07:06.530867   10256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 06:07:06.551140   10256 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 06:07:06.597611   10256 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 06:07:06.932902   10256 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 06:07:07.208924   10256 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 06:07:07.209413   10256 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 06:07:07.263523   10256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:07:07.542851   10256 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 06:07:15.921587   10256 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.3787132s)
	I0520 06:07:15.937215   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 06:07:15.973591   10256 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 06:07:16.039406   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 06:07:16.076622   10256 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 06:07:16.295621   10256 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 06:07:16.523456   10256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:07:16.725731   10256 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 06:07:16.778477   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 06:07:16.816226   10256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:07:17.046940   10256 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 06:07:17.240862   10256 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 06:07:17.257076   10256 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 06:07:17.267004   10256 start.go:562] Will wait 60s for crictl version
	I0520 06:07:17.280979   10256 ssh_runner.go:195] Run: which crictl
	I0520 06:07:17.305097   10256 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 06:07:17.366945   10256 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
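The start.go:578 block above prints the `crictl version` fields on separate `Key:  value` lines. A small sketch of parsing that output into a map (the helper name is hypothetical; the sample is copied from the log):

```python
def parse_crictl_version(output: str) -> dict:
    # Split "Key:  value" lines as emitted by `sudo /usr/bin/crictl version`.
    fields = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

sample = """Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  20.10.16
RuntimeApiVersion:  1.41.0"""
print(parse_crictl_version(sample)["RuntimeVersion"])  # → 20.10.16
```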
	I0520 06:07:17.378354   10256 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 06:07:17.442816   10256 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 06:07:17.500870   10256 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 06:07:17.501069   10256 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 06:07:17.506465   10256 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 06:07:17.506573   10256 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 06:07:17.506612   10256 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 06:07:17.506903   10256 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 06:07:17.509975   10256 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 06:07:17.509975   10256 ip.go:210] interface addr: 172.25.240.1/20
	I0520 06:07:17.523959   10256 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 06:07:17.534785   10256 kubeadm.go:877] updating cluster {Name:running-upgrade-649400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:ru
nning-upgrade-649400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.241.47 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 06:07:17.534785   10256 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 06:07:17.546405   10256 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 06:07:17.592392   10256 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 06:07:17.592392   10256 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
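The docker.go:691 decision above follows from the image list in the preceding stdout block: the VM's cached images carry the old `k8s.gcr.io` prefix, while this minikube build expects `registry.k8s.io` names, so the check reduces to a membership test (image names copied from the log):

```python
# Images reported by `docker images` inside the VM (from the log above).
preloaded = {
    "k8s.gcr.io/kube-apiserver:v1.24.1",
    "k8s.gcr.io/kube-proxy:v1.24.1",
    "k8s.gcr.io/etcd:3.5.3-0",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
}
needed = "registry.k8s.io/kube-apiserver:v1.24.1"
print(needed in preloaded)  # → False, so the preload tarball must be copied over
```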
	I0520 06:07:17.606392   10256 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 06:07:17.639070   10256 ssh_runner.go:195] Run: which lz4
	I0520 06:07:17.659643   10256 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 06:07:17.668109   10256 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 06:07:17.668321   10256 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425543115 bytes)
	I0520 06:07:20.277709   10256 docker.go:649] duration metric: took 2.6318636s to copy over tarball
	I0520 06:07:20.291350   10256 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 06:10:05.442898   10256 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2m45.1511087s)
	I0520 06:10:05.443172   10256 kubeadm.go:903] preload failed, will try to load cached images: extracting tarball: : sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: wait: remote command exited without exit status or exit signal
	stdout:
	
	stderr:
	I0520 06:10:05.454971   10256 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	W0520 06:10:05.454971   10256 ssh_runner.go:129] session error, resetting client: read tcp 172.25.240.1:63378->172.25.241.47:22: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
	I0520 06:10:05.455305   10256 retry.go:31] will retry after 350.442582ms: read tcp 172.25.240.1:63378->172.25.241.47:22: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
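The retry.go:31 line above schedules a retry after a fractional, non-round delay (350.442582ms), which suggests a jittered backoff. A hypothetical sketch of exponential backoff with full jitter, under the assumption that minikube uses something of this shape (the parameters and function are illustrative, not minikube's actual policy):

```python
import random

def backoff_delay(attempt: int, base: float = 0.3, cap: float = 60.0) -> float:
    # Exponential backoff with full jitter: delay drawn uniformly from
    # [0, base * 2**attempt], capped so retries never wait longer than `cap`.
    return min(cap, random.uniform(0, base * (2 ** attempt)))

print(backoff_delay(1))  # e.g. something in [0, 0.6] seconds
```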
	I0520 06:10:05.806664   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:08.299326   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:08.299747   10256 main.go:141] libmachine: [stderr =====>] : 
	W0520 06:10:08.299869   10256 docker.go:676] NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	I0520 06:10:08.299869   10256 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 06:10:08.319260   10256 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 06:10:08.328573   10256 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 06:10:08.329499   10256 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 06:10:08.335665   10256 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 06:10:08.336694   10256 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 06:10:08.338661   10256 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 06:10:08.339676   10256 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 06:10:08.343679   10256 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 06:10:08.348099   10256 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 06:10:08.350110   10256 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 06:10:08.351106   10256 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 06:10:08.360097   10256 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 06:10:08.361081   10256 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 06:10:08.361081   10256 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 06:10:08.363093   10256 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 06:10:08.367086   10256 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	W0520 06:10:08.468277   10256 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0520 06:10:08.563825   10256 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0520 06:10:08.669902   10256 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.24.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0520 06:10:08.780454   10256 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.24.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0520 06:10:08.832344   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 06:10:08.832344   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	W0520 06:10:08.872698   10256 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.8.6 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0520 06:10:08.874442   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 06:10:08.874442   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:08.931094   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 06:10:08.932096   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	W0520 06:10:08.984314   10256 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.24.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0520 06:10:09.033735   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 06:10:09.033735   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:09.093472   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 06:10:09.093472   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	W0520 06:10:09.121840   10256 image.go:187] authn lookup for registry.k8s.io/pause:3.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0520 06:10:09.277811   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 06:10:09.277811   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	W0520 06:10:09.282815   10256 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.24.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0520 06:10:09.327823   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 06:10:09.327823   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:09.603520   10256 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 06:10:09.603520   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.325732   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.325732   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.325732   10256 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0520 06:10:13.325732   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0
	I0520 06:10:13.325732   10256 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 06:10:13.345855   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 06:10:13.345855   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.508402   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.509386   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.510387   10256 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0520 06:10:13.510387   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.24.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.1
	I0520 06:10:13.510387   10256 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 06:10:13.515392   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.516396   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.516396   10256 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 06:10:13.516396   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0520 06:10:13.516396   10256 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 06:10:13.537058   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 06:10:13.537146   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.538783   10256 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 06:10:13.538783   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.580890   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.580890   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.580890   10256 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0520 06:10:13.580890   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.24.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.1
	I0520 06:10:13.580890   10256 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 06:10:13.606018   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 06:10:13.606018   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.849080   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.849152   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.849338   10256 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0520 06:10:13.849338   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6
	I0520 06:10:13.849473   10256 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 06:10:13.868358   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 06:10:13.868358   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.881364   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.881364   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.881364   10256 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0520 06:10:13.881364   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.24.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.1
	I0520 06:10:13.881364   10256 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 06:10:13.904059   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 06:10:13.904059   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:13.909645   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:13.909645   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:13.909879   10256 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0520 06:10:13.909948   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7
	I0520 06:10:13.909993   10256 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 06:10:13.948971   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 06:10:13.949972   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:14.784850   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:14.784850   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:14.784850   10256 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0520 06:10:14.784850   10256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.24.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.1
	I0520 06:10:14.784850   10256 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 06:10:14.800856   10256 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 06:10:14.800856   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:18.050783   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.050783   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:18.113789   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.113789   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:18.175415   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.175415   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:18.236994   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.236994   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:18.345176   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.345333   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:18.345176   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.345333   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:18.550813   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:18.550950   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:19.282297   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:19.282297   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:19.282653   10256 cache_images.go:92] duration metric: took 10.9827553s to LoadCachedImages
	W0520 06:10:19.282845   10256 out.go:239] X Unable to load cached images: loading cached images: removing image: remove image docker: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	X Unable to load cached images: loading cached images: removing image: remove image docker: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	I0520 06:10:19.282845   10256 kubeadm.go:928] updating node { 172.25.241.47 8443 v1.24.1 docker true true} ...
	I0520 06:10:19.282845   10256 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-649400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.241.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-649400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 06:10:19.295202   10256 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 06:10:19.295202   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:21.716381   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:21.717341   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:21.731336   10256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 06:10:21.731336   10256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-649400 ).state
	I0520 06:10:24.072608   10256 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0520 06:10:24.072608   10256 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:10:24.076979   10256 out.go:177] 
	W0520 06:10:24.079308   10256 out.go:239] X Exiting due to K8S_INSTALL_FAILED_CONTAINER_RUNTIME_NOT_RUNNING: Failed to update cluster: update primary control-plane node: generating kubeadm cfg: container runtime is not running
	X Exiting due to K8S_INSTALL_FAILED_CONTAINER_RUNTIME_NOT_RUNNING: Failed to update cluster: update primary control-plane node: generating kubeadm cfg: container runtime is not running
	W0520 06:10:24.079368   10256 out.go:239] * 
	* 
	W0520 06:10:24.080729   10256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 06:10:24.083358   10256 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-649400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 93
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-20 06:10:24.2585993 -0700 PDT m=+10179.927220501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-649400 -n running-upgrade-649400
E0520 06:10:25.071481    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-649400 -n running-upgrade-649400: exit status 7 (2.6387218s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0520 06:10:24.380087    9576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-649400" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-649400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-649400
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p running-upgrade-649400: exit status 81 (2.915414s)

-- stdout --
	* Removing C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-649400 ...
	
	

-- /stdout --
** stderr ** 
	W0520 06:10:27.012389    9332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_FILE_IN_USE: remove C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-649400\disk.vhd: The process cannot access the file because it is being used by another process.
	* Suggestion: Another program is using a file required by minikube. If you are using Hyper-V, try stopping the minikube VM from within the Hyper-V manager
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/
	* Related issue: https://github.com/kubernetes/minikube/issues/7300

** /stderr **
helpers_test.go:180: failed cleanup: exit status 81
--- FAIL: TestRunningBinaryUpgrade (1087.10s)

TestKubernetesUpgrade (1585.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (6m7.9037067s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-771300
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-771300: (41.5098231s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-771300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-771300 status --format={{.Host}}: exit status 7 (2.541523s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0520 05:54:11.250275   13408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0520 05:55:25.059162    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (8m1.0453393s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-771300 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (236.8345ms)

-- stdout --
	* [kubernetes-upgrade-771300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0520 06:02:15.034994    4104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-771300
	    minikube start -p kubernetes-upgrade-771300 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7713002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-771300 --kubernetes-version=v1.30.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0520 06:02:47.840483    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 06:03:04.570509    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (7m6.3953444s)

-- stdout --
	* [kubernetes-upgrade-771300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-771300" primary control-plane node in "kubernetes-upgrade-771300" cluster
	* Updating the running hyperv "kubernetes-upgrade-771300" VM ...
	
	

-- /stdout --
** stderr ** 
	W0520 06:02:15.278082    7224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 06:02:15.280032    7224 out.go:291] Setting OutFile to fd 1384 ...
	I0520 06:02:15.281087    7224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 06:02:15.281148    7224 out.go:304] Setting ErrFile to fd 1928...
	I0520 06:02:15.281209    7224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 06:02:15.318131    7224 out.go:298] Setting JSON to false
	I0520 06:02:15.322878    7224 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10131,"bootTime":1716200003,"procs":213,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 06:02:15.323972    7224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 06:02:15.326954    7224 out.go:177] * [kubernetes-upgrade-771300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 06:02:15.330953    7224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 06:02:15.330594    7224 notify.go:220] Checking for updates...
	I0520 06:02:15.333808    7224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 06:02:15.336420    7224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 06:02:15.339075    7224 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 06:02:15.341369    7224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 06:02:15.344756    7224 config.go:182] Loaded profile config "kubernetes-upgrade-771300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:02:15.346073    7224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 06:02:21.378536    7224 out.go:177] * Using the hyperv driver based on existing profile
	I0520 06:02:21.381494    7224 start.go:297] selected driver: hyperv
	I0520 06:02:21.381494    7224 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-771300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-771300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.1 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 06:02:21.381494    7224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 06:02:21.436376    7224 cni.go:84] Creating CNI manager for ""
	I0520 06:02:21.436376    7224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 06:02:21.436376    7224 start.go:340] cluster config:
	{Name:kubernetes-upgrade-771300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-771300 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.246.1 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 06:02:21.437433    7224 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 06:02:21.441275    7224 out.go:177] * Starting "kubernetes-upgrade-771300" primary control-plane node in "kubernetes-upgrade-771300" cluster
	I0520 06:02:21.444452    7224 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 06:02:21.444452    7224 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 06:02:21.444452    7224 cache.go:56] Caching tarball of preloaded images
	I0520 06:02:21.444971    7224 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 06:02:21.445095    7224 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 06:02:21.445427    7224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-771300\config.json ...
	I0520 06:02:21.448723    7224 start.go:360] acquireMachinesLock for kubernetes-upgrade-771300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 06:06:42.594543    7224 start.go:364] duration metric: took 4m21.1449903s to acquireMachinesLock for "kubernetes-upgrade-771300"
	I0520 06:06:42.594964    7224 start.go:96] Skipping create...Using existing machine configuration
	I0520 06:06:42.594987    7224 fix.go:54] fixHost starting: 
	I0520 06:06:42.595783    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:06:45.192347    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:45.192347    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:45.192347    7224 fix.go:112] recreateIfNeeded on kubernetes-upgrade-771300: state=Running err=<nil>
	W0520 06:06:45.192347    7224 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 06:06:45.195689    7224 out.go:177] * Updating the running hyperv "kubernetes-upgrade-771300" VM ...
	I0520 06:06:45.199050    7224 machine.go:94] provisionDockerMachine start ...
	I0520 06:06:45.199050    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:06:47.845568    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:47.846347    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:47.846426    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:51.062095    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:06:51.062207    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:51.069137    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:51.069137    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:06:51.069137    7224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 06:06:51.212529    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-771300
	
	I0520 06:06:51.212625    7224 buildroot.go:166] provisioning hostname "kubernetes-upgrade-771300"
	I0520 06:06:51.212756    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:06:53.936480    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:53.936538    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:53.936538    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:06:56.846697    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:06:56.846697    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:56.853193    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:06:56.853746    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:06:56.853746    7224 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-771300 && echo "kubernetes-upgrade-771300" | sudo tee /etc/hostname
	I0520 06:06:57.045480    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-771300
	
	I0520 06:06:57.045687    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:06:59.494556    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:06:59.494556    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:06:59.494556    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:02.333739    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:02.333910    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:02.339807    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:07:02.340441    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:07:02.340441    7224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-771300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-771300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-771300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 06:07:02.490902    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
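The shell snippet minikube just ran over SSH updates `/etc/hosts` idempotently: it does nothing if the hostname is already mapped, rewrites an existing `127.0.1.1` entry in place, and only appends as a last resort. A minimal Python sketch of that same logic (the helper name is ours, not minikube's):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror minikube's /etc/hosts update: no-op, in-place rewrite, or append."""
    # Already mapped? (any line ending in whitespace + the hostname)
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
        return hosts
    # Rewrite an existing 127.0.1.1 entry in place...
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    # ...otherwise append a new one.
    return hosts + f"\n127.0.1.1 {name}"
```

Running it twice with the same hostname leaves the file unchanged, which is why the SSH command above produces empty output on an already-provisioned VM.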
	I0520 06:07:02.490993    7224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 06:07:02.490993    7224 buildroot.go:174] setting up certificates
	I0520 06:07:02.490993    7224 provision.go:84] configureAuth start
	I0520 06:07:02.491087    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:04.833983    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:04.833983    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:04.833983    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:07.823394    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:07.823394    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:07.823394    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:10.319564    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:10.319564    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:10.319564    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:13.335075    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:13.335075    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:13.335075    7224 provision.go:143] copyHostCerts
	I0520 06:07:13.336085    7224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 06:07:13.336085    7224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 06:07:13.336085    7224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 06:07:13.338075    7224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 06:07:13.338075    7224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 06:07:13.338075    7224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 06:07:13.340076    7224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 06:07:13.340076    7224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 06:07:13.340076    7224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 06:07:13.341073    7224 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-771300 san=[127.0.0.1 172.25.246.1 kubernetes-upgrade-771300 localhost minikube]
	I0520 06:07:13.876651    7224 provision.go:177] copyRemoteCerts
	I0520 06:07:13.892861    7224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 06:07:13.892861    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:16.326226    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:16.326226    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:16.326795    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:19.261730    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:19.261730    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:19.261730    7224 sshutil.go:53] new ssh client: &{IP:172.25.246.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-771300\id_rsa Username:docker}
	I0520 06:07:19.376810    7224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.4839337s)
	I0520 06:07:19.376810    7224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 06:07:19.434815    7224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0520 06:07:19.494803    7224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 06:07:19.569634    7224 provision.go:87] duration metric: took 17.0785948s to configureAuth
	I0520 06:07:19.569734    7224 buildroot.go:189] setting minikube options for container-runtime
	I0520 06:07:19.570220    7224 config.go:182] Loaded profile config "kubernetes-upgrade-771300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:07:19.570220    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:22.061683    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:22.062023    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:22.062147    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:24.910257    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:24.910257    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:24.919816    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:07:24.919816    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:07:24.919816    7224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 06:07:25.069205    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 06:07:25.069205    7224 buildroot.go:70] root file system type: tmpfs
	I0520 06:07:25.069762    7224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 06:07:25.069882    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:27.510412    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:27.510564    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:27.510717    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:30.402466    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:30.402466    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:30.409916    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:07:30.410689    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:07:30.410689    7224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 06:07:30.572516    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 06:07:30.572673    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:34.175896    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:34.176005    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:34.176077    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:37.130656    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:37.130656    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:37.137114    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:07:37.137982    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:07:37.138075    7224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 06:07:37.302832    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 06:07:37.302832    7224 machine.go:97] duration metric: took 52.1036405s to provisionDockerMachine
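The generated `docker.service` unit above contains two `ExecStart=` lines on purpose: in systemd, an empty `ExecStart=` assignment clears any command inherited from the base unit, so only the second (full) command takes effect; without the reset, two commands would be an error for `Type=notify` services. A small sketch of that reset semantics (this models systemd's parsing rule, it does not invoke systemd):

```python
def effective_execstart(unit_text: str) -> list[str]:
    """Return the ExecStart command list after applying systemd's reset rule."""
    cmds = []
    for line in unit_text.splitlines():
        line = line.strip()
        if line.startswith("ExecStart="):
            value = line[len("ExecStart="):].strip()
            if value == "":
                cmds = []          # empty assignment resets the inherited list
            else:
                cmds.append(value)
    return cmds
```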
	I0520 06:07:37.302832    7224 start.go:293] postStartSetup for "kubernetes-upgrade-771300" (driver="hyperv")
	I0520 06:07:37.302832    7224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 06:07:37.317853    7224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 06:07:37.317853    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:39.729131    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:39.729131    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:39.729131    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:42.587895    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:42.587895    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:42.588901    7224 sshutil.go:53] new ssh client: &{IP:172.25.246.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-771300\id_rsa Username:docker}
	I0520 06:07:42.725541    7224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.4075301s)
	I0520 06:07:42.740429    7224 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 06:07:42.747693    7224 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 06:07:42.747693    7224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 06:07:42.747998    7224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 06:07:42.749389    7224 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 06:07:42.761376    7224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 06:07:42.782861    7224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 06:07:42.833393    7224 start.go:296] duration metric: took 5.5305046s for postStartSetup
	I0520 06:07:42.833550    7224 fix.go:56] duration metric: took 1m0.2384012s for fixHost
	I0520 06:07:42.833759    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:45.174738    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:45.174738    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:45.174993    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:48.039748    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:48.039748    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:48.046416    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:07:48.047233    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:07:48.047233    7224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 06:07:48.179733    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210468.170621335
	
	I0520 06:07:48.179733    7224 fix.go:216] guest clock: 1716210468.170621335
	I0520 06:07:48.179733    7224 fix.go:229] Guest: 2024-05-20 06:07:48.170621335 -0700 PDT Remote: 2024-05-20 06:07:42.833625 -0700 PDT m=+327.650014501 (delta=5.336996335s)
	I0520 06:07:48.179733    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:50.617232    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:50.618031    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:50.618117    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:53.542797    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:53.543816    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:53.548782    7224 main.go:141] libmachine: Using SSH client type: native
	I0520 06:07:53.548782    7224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.246.1 22 <nil> <nil>}
	I0520 06:07:53.548782    7224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716210468
	I0520 06:07:53.718382    7224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 13:07:48 UTC 2024
	
	I0520 06:07:53.718382    7224 fix.go:236] clock set: Mon May 20 13:07:48 UTC 2024
	 (err=<nil>)
	I0520 06:07:53.718382    7224 start.go:83] releasing machines lock for "kubernetes-upgrade-771300", held for 1m11.1234883s
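The `fix.go` lines above compare the guest's `date +%s.%N` against the host-side wall clock and reset the VM clock because the drift (delta=5.336996335s) exceeds minikube's tolerance. The arithmetic can be reproduced from the two logged values:

```python
from datetime import datetime, timezone, timedelta

guest = 1716210468.170621335                 # `date +%s.%N` output from the VM
pdt = timezone(timedelta(hours=-7))          # the -0700 offset shown in the log
remote = datetime(2024, 5, 20, 6, 7, 42, 833625, tzinfo=pdt).timestamp()
delta = guest - remote                       # drift that triggers `sudo date -s`
print(f"delta={delta:.9f}s")
```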
	I0520 06:07:53.719001    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:56.318356    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:56.318356    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:56.318356    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:59.432180    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:59.432348    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:59.440003    7224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 06:07:59.440262    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:59.461616    7224 ssh_runner.go:195] Run: cat /version.json
	I0520 06:07:59.461616    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:08:02.184179    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:02.184179    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:02.184179    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:02.187648    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:02.187648    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:02.188254    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:05.497526    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:08:05.497690    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:05.497690    7224 sshutil.go:53] new ssh client: &{IP:172.25.246.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-771300\id_rsa Username:docker}
	I0520 06:08:05.531219    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:08:05.531219    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:05.531219    7224 sshutil.go:53] new ssh client: &{IP:172.25.246.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-771300\id_rsa Username:docker}
	I0520 06:08:05.617153    7224 ssh_runner.go:235] Completed: cat /version.json: (6.1555208s)
	W0520 06:08:05.617567    7224 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 06:08:05.639188    7224 ssh_runner.go:195] Run: systemctl --version
	I0520 06:08:07.637072    7224 ssh_runner.go:235] Completed: systemctl --version: (1.9968463s)
	I0520 06:08:07.637072    7224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (8.1969667s)
	W0520 06:08:07.637072    7224 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0520 06:08:07.637072    7224 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0520 06:08:07.637072    7224 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0520 06:08:07.651984    7224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 06:08:07.660351    7224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 06:08:07.673355    7224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 06:08:07.704359    7224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 06:08:07.735684    7224 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
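The two `sed` one-liners above retarget any bridge/podman CNI conf files to minikube's pod network (`10.244.0.0/16`, gateway `10.244.0.1`). A Python equivalent of the podman-case substitution, for readability (the regexes here approximate the `sed` expressions, they are not byte-for-byte identical):

```python
import re

def retarget(conf: str) -> str:
    """Rewrite CNI subnet/gateway values to minikube's pod network."""
    conf = re.sub(r'"subnet":\s*"[^"]*"', '"subnet": "10.244.0.0/16"', conf)
    conf = re.sub(r'"gateway":\s*"[^"]*"', '"gateway": "10.244.0.1"', conf)
    return conf
```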
	I0520 06:08:07.735684    7224 start.go:494] detecting cgroup driver to use...
	I0520 06:08:07.735684    7224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:08:07.795582    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 06:08:07.833461    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 06:08:07.859414    7224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 06:08:07.873573    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 06:08:07.909376    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:08:07.947228    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 06:08:07.981758    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:08:08.019759    7224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 06:08:08.060329    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 06:08:08.098678    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 06:08:08.137260    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
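The run of `sed -i -r` commands above (06:08:07.795 through 06:08:08.137) rewrites `/etc/containerd/config.toml` to pin the pause image, disable `SystemdCgroup` (i.e. select the "cgroupfs" driver), and swap the legacy `io.containerd.runtime.v1.linux` shim for `io.containerd.runc.v2`. A minimal reproduction of the key edits on a scratch copy of the file, with illustrative contents rather than the VM's real config, might look like:

```shell
# Hypothetical reproduction of minikube's config.toml edits on a scratch file.
# The sample contents below are assumed; only the sed expressions mirror the log.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    SystemdCgroup = true
    restrict_oom_score_adj = true
EOF
# Pin the pause image, force the cgroupfs driver, and relax OOM score handling,
# matching the sed -i -r invocations shown in the log above.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
cat "$cfg"
rm -f "$cfg"
```

The `( *)` capture group preserves the original TOML indentation, which is why the replacements can be applied without knowing where in the plugin tree each key sits.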
	I0520 06:08:08.172644    7224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 06:08:08.212344    7224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 06:08:08.247674    7224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:08:08.560332    7224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 06:08:08.600228    7224 start.go:494] detecting cgroup driver to use...
	I0520 06:08:08.615248    7224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 06:08:08.659663    7224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:08:08.704285    7224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 06:08:08.771331    7224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:08:08.812126    7224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 06:08:08.840605    7224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:08:08.891771    7224 ssh_runner.go:195] Run: which cri-dockerd
	I0520 06:08:08.912256    7224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 06:08:08.932933    7224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 06:08:08.988990    7224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 06:08:09.322017    7224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 06:08:09.615305    7224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 06:08:09.615305    7224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
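The `scp memory --> /etc/docker/daemon.json (130 bytes)` step above ships a small daemon.json that configures Docker for the "cgroupfs" cgroup driver; the actual 130-byte payload is not printed in the log. A plausible shape for such a file, written here to a scratch path purely as an assumed example, is:

```shell
# Assumed daemon.json content -- the real file minikube uploads is not shown
# in this log, only its size (130 bytes) and purpose (cgroupfs driver).
dst=$(mktemp)   # stand-in for /etc/docker/daemon.json
cat > "$dst" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
cat "$dst"
rm -f "$dst"
```

`exec-opts: native.cgroupdriver=...` is the documented dockerd knob for the cgroup driver; a malformed or incompatible daemon.json is also one of the common reasons the subsequent `sudo systemctl restart docker` (which fails after 1m11s below) can exit with status 1.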
	I0520 06:08:09.668511    7224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:08:09.957124    7224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 06:09:21.404284    7224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4469699s)
	I0520 06:09:21.419140    7224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 06:09:21.510998    7224 out.go:177] 
	W0520 06:09:21.513447    7224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 13:00:59 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.037443074Z" level=info msg="Starting up"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.038319876Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.039637079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.080460865Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110587729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110768930Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110866230Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110885330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113200235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113305435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113571636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113678936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113702736Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113716136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.114298537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.114962239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.118680547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.118813447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119026047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119120547Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119814849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119853049Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119869249Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122547655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122663855Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122691455Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122763655Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122786055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122863655Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123307456Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123604557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123777457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123801757Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123834957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123866058Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123885758Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123903058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123919358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123934958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123948658Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123973158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124002458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124018458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124051258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124093058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124128358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124144058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124157758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124172358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124187058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124221458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124238458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124252458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124272558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124307558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124365059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124385659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124400159Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124509659Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124855060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124954260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124976560Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125051060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125093360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125108260Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125636761Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125886262Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.126052562Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.126155962Z" level=info msg="containerd successfully booted in 0.049423s"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.094250311Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.243002207Z" level=info msg="Loading containers: start."
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.691460709Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.773211320Z" level=info msg="Loading containers: done."
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.800062926Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.800884720Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:01 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.858529005Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.858735403Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.828686794Z" level=info msg="Processing signal 'terminated'"
	May 20 13:01:29 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830122681Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830492278Z" level=info msg="Daemon shutdown complete"
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830752675Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830763675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.922757229Z" level=info msg="Starting up"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.924099117Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.925255106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1140
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.959361892Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986473143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986577442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986621042Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986637041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986661041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986673441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986904939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987092937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987151337Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987164237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987191236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987342235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990420507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990514306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990659804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990693504Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990720604Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990737504Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990748904Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991134000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991247399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991270199Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991286199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991300698Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991348298Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991628495Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991774594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991871793Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991893493Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991907193Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991921193Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991938293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992008292Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992025792Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992085791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992104491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992116191Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992135791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992151591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992164491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992177490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992190390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992203790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992216290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992229090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992249690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992268890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992281689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992294389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992306689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992325889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992347389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992366589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992378589Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992496987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992573887Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992588687Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992599587Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992678386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992699586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992714085Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993197881Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993278580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993323180Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993360380Z" level=info msg="containerd successfully booted in 0.035865s"
	May 20 13:01:31 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:31.978697315Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.009201834Z" level=info msg="Loading containers: start."
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.370455711Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.458096904Z" level=info msg="Loading containers: done."
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.482420880Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.482579679Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.527803163Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.527893462Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:32 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:45 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.842855467Z" level=info msg="Processing signal 'terminated'"
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.844484152Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846105637Z" level=info msg="Daemon shutdown complete"
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846235336Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846306935Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.927287891Z" level=info msg="Starting up"
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.928556979Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.932662041Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1552
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:46.973318967Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.004916376Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005115875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005190374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005209174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005241973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005256673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005627270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005729169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005750369Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005761769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005789968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005940767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009212237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009311836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009455835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009597533Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009638133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009660533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009672133Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009829431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009994130Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010016629Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010037029Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010100629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010152028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010426026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010636224Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010728123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010747823Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010761523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010774122Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010791222Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010806122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010823922Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010836822Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010849322Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010860422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010898421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010935021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010965721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010979621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011010720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011035920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011086920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011101519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011117219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011133119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011145219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011158819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011176419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011193919Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011215518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011230618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011242518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011407717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011453116Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011466316Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011478916Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011738814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011823713Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011855013Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012273509Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012414407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012484007Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012504107Z" level=info msg="containerd successfully booted in 0.042391s"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:47.981876489Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.476586837Z" level=info msg="Loading containers: start."
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.807559492Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.905307593Z" level=info msg="Loading containers: done."
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.931221455Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.931373653Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.978281022Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.978438320Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:48 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.130773795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133253132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133393534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133691239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178014301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178177603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178210804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178417807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.230581685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231234395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231354497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231550800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250397781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250502383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250544683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.254253139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709704838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709805940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709832240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709950942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.821455107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.821982715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.822128117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.822516823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.847471995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.847934702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.848103805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.848306708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.892798372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.893151577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.893329380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.894125192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.774904743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.774982243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.775000443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.775222145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820259418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820446919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820640521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820917323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882192130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882425332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882512233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882741135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514251213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514495615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514689617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.515177520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682587451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682743552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682763852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682947454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010186431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010291745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010307348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010569583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:13.912132740Z" level=info msg="ignoring event" container=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914336490Z" level=info msg="shim disconnected" id=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 namespace=moby
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914712433Z" level=warning msg="cleaning up after shim disconnected" id=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 namespace=moby
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914778040Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216865467Z" level=info msg="shim disconnected" id=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216950077Z" level=warning msg="cleaning up after shim disconnected" id=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216967979Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:14.218894194Z" level=info msg="ignoring event" container=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.301813261Z" level=warning msg="cleanup warnings time=\"2024-05-20T13:02:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796412739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796820785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796885692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.797244032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309100786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309417221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309522833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309832967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607638677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607740188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607762091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.609687203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.704693802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705133551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705185357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705558298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.101434997Z" level=info msg="shim disconnected" id=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:19.101692603Z" level=info msg="ignoring event" container=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.103346538Z" level=warning msg="cleaning up after shim disconnected" id=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.103515142Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:19.281857359Z" level=info msg="ignoring event" container=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.282839880Z" level=info msg="shim disconnected" id=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.282987083Z" level=warning msg="cleaning up after shim disconnected" id=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.283004584Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:31.876849124Z" level=info msg="ignoring event" container=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.876684723Z" level=info msg="shim disconnected" id=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.877874330Z" level=warning msg="cleaning up after shim disconnected" id=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.878129431Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.905305679Z" level=warning msg="cleanup warnings time=\"2024-05-20T13:02:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348168231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348227132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348239532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348380632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.856310828Z" level=info msg="shim disconnected" id=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 namespace=moby
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:04:11.857238831Z" level=info msg="ignoring event" container=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.858242934Z" level=warning msg="cleaning up after shim disconnected" id=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 namespace=moby
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.858434834Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538235984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538449684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538488584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538656385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:06:11.870709878Z" level=info msg="ignoring event" container=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874226381Z" level=info msg="shim disconnected" id=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a namespace=moby
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874612781Z" level=warning msg="cleaning up after shim disconnected" id=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a namespace=moby
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874682181Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.145912696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146016996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146467497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146923397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:01.931343263Z" level=info msg="ignoring event" container=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.934941035Z" level=info msg="shim disconnected" id=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 namespace=moby
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.935393532Z" level=warning msg="cleaning up after shim disconnected" id=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 namespace=moby
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.935498231Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252177659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252290558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252734354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.253607748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:09 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:09.973002592Z" level=info msg="Processing signal 'terminated'"
	May 20 13:08:09 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.230142085Z" level=info msg="ignoring event" container=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.232967063Z" level=info msg="shim disconnected" id=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.233137461Z" level=warning msg="cleaning up after shim disconnected" id=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.233189961Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247023453Z" level=info msg="shim disconnected" id=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247144852Z" level=warning msg="cleaning up after shim disconnected" id=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247157752Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.247372450Z" level=info msg="ignoring event" container=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.250825123Z" level=info msg="ignoring event" container=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251279720Z" level=info msg="shim disconnected" id=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251393319Z" level=warning msg="cleaning up after shim disconnected" id=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251506218Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.274547738Z" level=info msg="ignoring event" container=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275660229Z" level=info msg="shim disconnected" id=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275766329Z" level=warning msg="cleaning up after shim disconnected" id=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275945927Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.299742941Z" level=info msg="shim disconnected" id=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.302401521Z" level=info msg="ignoring event" container=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.302824617Z" level=info msg="ignoring event" container=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.319605986Z" level=warning msg="cleaning up after shim disconnected" id=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.327187827Z" level=info msg="ignoring event" container=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.328461417Z" level=info msg="ignoring event" container=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.328775015Z" level=info msg="ignoring event" container=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.331314295Z" level=info msg="ignoring event" container=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.336717753Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.340174826Z" level=info msg="ignoring event" container=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.320113582Z" level=info msg="shim disconnected" id=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.340568023Z" level=warning msg="cleaning up after shim disconnected" id=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.340992420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.350136848Z" level=info msg="ignoring event" container=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325335642Z" level=info msg="shim disconnected" id=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.351143140Z" level=info msg="shim disconnected" id=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.354832711Z" level=warning msg="cleaning up after shim disconnected" id=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.354848411Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.364943533Z" level=warning msg="cleaning up after shim disconnected" id=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.367236815Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.331707292Z" level=info msg="shim disconnected" id=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.376494842Z" level=warning msg="cleaning up after shim disconnected" id=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.376813840Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.331643992Z" level=info msg="shim disconnected" id=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325355442Z" level=info msg="shim disconnected" id=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325794638Z" level=info msg="shim disconnected" id=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.382439096Z" level=warning msg="cleaning up after shim disconnected" id=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.382490296Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.387660755Z" level=warning msg="cleaning up after shim disconnected" id=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.387825654Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.395710992Z" level=warning msg="cleaning up after shim disconnected" id=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.395811492Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.095287980Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.118381400Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.153403626Z" level=info msg="ignoring event" container=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.156847699Z" level=info msg="shim disconnected" id=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.157645893Z" level=warning msg="cleaning up after shim disconnected" id=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.158003790Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.180419115Z" level=info msg="ignoring event" container=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.183090995Z" level=info msg="shim disconnected" id=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.183780289Z" level=warning msg="cleaning up after shim disconnected" id=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.184228486Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.270849910Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271569504Z" level=info msg="Daemon shutdown complete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271724103Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271760402Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Consumed 12.678s CPU time.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:08:21 kubernetes-upgrade-771300 dockerd[5574]: time="2024-05-20T13:08:21.358571201Z" level=info msg="Starting up"
	May 20 13:09:21 kubernetes-upgrade-771300 dockerd[5574]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 13:00:59 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.037443074Z" level=info msg="Starting up"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.038319876Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.039637079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.080460865Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110587729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110768930Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110866230Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110885330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113200235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113305435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113571636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113678936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113702736Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113716136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.114298537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.114962239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.118680547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.118813447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119026047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119120547Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119814849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119853049Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119869249Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122547655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122663855Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122691455Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122763655Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122786055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122863655Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123307456Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123604557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123777457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123801757Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123834957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123866058Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123885758Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123903058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123919358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123934958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123948658Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123973158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124002458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124018458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124051258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124093058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124128358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124144058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124157758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124172358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124187058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124221458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124238458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124252458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124272558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124307558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124365059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124385659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124400159Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124509659Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124855060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124954260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124976560Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125051060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125093360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125108260Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125636761Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125886262Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.126052562Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.126155962Z" level=info msg="containerd successfully booted in 0.049423s"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.094250311Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.243002207Z" level=info msg="Loading containers: start."
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.691460709Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.773211320Z" level=info msg="Loading containers: done."
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.800062926Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.800884720Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:01 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.858529005Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.858735403Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.828686794Z" level=info msg="Processing signal 'terminated'"
	May 20 13:01:29 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830122681Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830492278Z" level=info msg="Daemon shutdown complete"
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830752675Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830763675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.922757229Z" level=info msg="Starting up"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.924099117Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.925255106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1140
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.959361892Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986473143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986577442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986621042Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986637041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986661041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986673441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986904939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987092937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987151337Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987164237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987191236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987342235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990420507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990514306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990659804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990693504Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990720604Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990737504Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990748904Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991134000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991247399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991270199Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991286199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991300698Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991348298Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991628495Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991774594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991871793Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991893493Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991907193Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991921193Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991938293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992008292Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992025792Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992085791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992104491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992116191Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992135791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992151591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992164491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992177490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992190390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992203790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992216290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992229090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992249690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992268890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992281689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992294389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992306689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992325889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992347389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992366589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992378589Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992496987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992573887Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992588687Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992599587Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992678386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992699586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992714085Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993197881Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993278580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993323180Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993360380Z" level=info msg="containerd successfully booted in 0.035865s"
	May 20 13:01:31 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:31.978697315Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.009201834Z" level=info msg="Loading containers: start."
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.370455711Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.458096904Z" level=info msg="Loading containers: done."
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.482420880Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.482579679Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.527803163Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.527893462Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:32 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:45 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.842855467Z" level=info msg="Processing signal 'terminated'"
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.844484152Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846105637Z" level=info msg="Daemon shutdown complete"
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846235336Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846306935Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.927287891Z" level=info msg="Starting up"
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.928556979Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.932662041Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1552
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:46.973318967Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.004916376Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005115875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005190374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005209174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005241973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005256673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005627270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005729169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005750369Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005761769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005789968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005940767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009212237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009311836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009455835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009597533Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009638133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009660533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009672133Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009829431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009994130Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010016629Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010037029Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010100629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010152028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010426026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010636224Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010728123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010747823Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010761523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010774122Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010791222Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010806122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010823922Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010836822Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010849322Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010860422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010898421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010935021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010965721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010979621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011010720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011035920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011086920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011101519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011117219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011133119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011145219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011158819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011176419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011193919Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011215518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011230618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011242518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011407717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011453116Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011466316Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011478916Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011738814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011823713Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011855013Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012273509Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012414407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012484007Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012504107Z" level=info msg="containerd successfully booted in 0.042391s"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:47.981876489Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.476586837Z" level=info msg="Loading containers: start."
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.807559492Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.905307593Z" level=info msg="Loading containers: done."
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.931221455Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.931373653Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.978281022Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.978438320Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:48 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.130773795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133253132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133393534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133691239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178014301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178177603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178210804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178417807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.230581685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231234395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231354497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231550800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250397781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250502383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250544683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.254253139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709704838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709805940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709832240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709950942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.821455107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.821982715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.822128117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.822516823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.847471995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.847934702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.848103805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.848306708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.892798372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.893151577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.893329380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.894125192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.774904743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.774982243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.775000443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.775222145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820259418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820446919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820640521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820917323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882192130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882425332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882512233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882741135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514251213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514495615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514689617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.515177520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682587451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682743552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682763852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682947454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010186431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010291745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010307348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010569583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:13.912132740Z" level=info msg="ignoring event" container=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914336490Z" level=info msg="shim disconnected" id=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 namespace=moby
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914712433Z" level=warning msg="cleaning up after shim disconnected" id=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 namespace=moby
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914778040Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216865467Z" level=info msg="shim disconnected" id=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216950077Z" level=warning msg="cleaning up after shim disconnected" id=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216967979Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:14.218894194Z" level=info msg="ignoring event" container=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.301813261Z" level=warning msg="cleanup warnings time=\"2024-05-20T13:02:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796412739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796820785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796885692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.797244032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309100786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309417221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309522833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309832967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607638677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607740188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607762091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.609687203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.704693802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705133551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705185357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705558298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.101434997Z" level=info msg="shim disconnected" id=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:19.101692603Z" level=info msg="ignoring event" container=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.103346538Z" level=warning msg="cleaning up after shim disconnected" id=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.103515142Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:19.281857359Z" level=info msg="ignoring event" container=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.282839880Z" level=info msg="shim disconnected" id=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.282987083Z" level=warning msg="cleaning up after shim disconnected" id=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.283004584Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:31.876849124Z" level=info msg="ignoring event" container=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.876684723Z" level=info msg="shim disconnected" id=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.877874330Z" level=warning msg="cleaning up after shim disconnected" id=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.878129431Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.905305679Z" level=warning msg="cleanup warnings time=\"2024-05-20T13:02:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348168231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348227132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348239532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348380632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.856310828Z" level=info msg="shim disconnected" id=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 namespace=moby
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:04:11.857238831Z" level=info msg="ignoring event" container=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.858242934Z" level=warning msg="cleaning up after shim disconnected" id=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 namespace=moby
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.858434834Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538235984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538449684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538488584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538656385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:06:11.870709878Z" level=info msg="ignoring event" container=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874226381Z" level=info msg="shim disconnected" id=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a namespace=moby
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874612781Z" level=warning msg="cleaning up after shim disconnected" id=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a namespace=moby
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874682181Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.145912696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146016996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146467497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146923397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:01.931343263Z" level=info msg="ignoring event" container=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.934941035Z" level=info msg="shim disconnected" id=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 namespace=moby
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.935393532Z" level=warning msg="cleaning up after shim disconnected" id=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 namespace=moby
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.935498231Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252177659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252290558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252734354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.253607748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:09 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:09.973002592Z" level=info msg="Processing signal 'terminated'"
	May 20 13:08:09 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.230142085Z" level=info msg="ignoring event" container=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.232967063Z" level=info msg="shim disconnected" id=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.233137461Z" level=warning msg="cleaning up after shim disconnected" id=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.233189961Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247023453Z" level=info msg="shim disconnected" id=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247144852Z" level=warning msg="cleaning up after shim disconnected" id=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247157752Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.247372450Z" level=info msg="ignoring event" container=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.250825123Z" level=info msg="ignoring event" container=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251279720Z" level=info msg="shim disconnected" id=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251393319Z" level=warning msg="cleaning up after shim disconnected" id=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251506218Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.274547738Z" level=info msg="ignoring event" container=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275660229Z" level=info msg="shim disconnected" id=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275766329Z" level=warning msg="cleaning up after shim disconnected" id=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275945927Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.299742941Z" level=info msg="shim disconnected" id=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.302401521Z" level=info msg="ignoring event" container=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.302824617Z" level=info msg="ignoring event" container=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.319605986Z" level=warning msg="cleaning up after shim disconnected" id=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.327187827Z" level=info msg="ignoring event" container=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.328461417Z" level=info msg="ignoring event" container=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.328775015Z" level=info msg="ignoring event" container=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.331314295Z" level=info msg="ignoring event" container=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.336717753Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.340174826Z" level=info msg="ignoring event" container=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.320113582Z" level=info msg="shim disconnected" id=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.340568023Z" level=warning msg="cleaning up after shim disconnected" id=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.340992420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.350136848Z" level=info msg="ignoring event" container=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325335642Z" level=info msg="shim disconnected" id=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.351143140Z" level=info msg="shim disconnected" id=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.354832711Z" level=warning msg="cleaning up after shim disconnected" id=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.354848411Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.364943533Z" level=warning msg="cleaning up after shim disconnected" id=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.367236815Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.331707292Z" level=info msg="shim disconnected" id=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.376494842Z" level=warning msg="cleaning up after shim disconnected" id=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.376813840Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.331643992Z" level=info msg="shim disconnected" id=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325355442Z" level=info msg="shim disconnected" id=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325794638Z" level=info msg="shim disconnected" id=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.382439096Z" level=warning msg="cleaning up after shim disconnected" id=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.382490296Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.387660755Z" level=warning msg="cleaning up after shim disconnected" id=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.387825654Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.395710992Z" level=warning msg="cleaning up after shim disconnected" id=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.395811492Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.095287980Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.118381400Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.153403626Z" level=info msg="ignoring event" container=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.156847699Z" level=info msg="shim disconnected" id=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.157645893Z" level=warning msg="cleaning up after shim disconnected" id=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.158003790Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.180419115Z" level=info msg="ignoring event" container=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.183090995Z" level=info msg="shim disconnected" id=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.183780289Z" level=warning msg="cleaning up after shim disconnected" id=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.184228486Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.270849910Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271569504Z" level=info msg="Daemon shutdown complete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271724103Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271760402Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Consumed 12.678s CPU time.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:08:21 kubernetes-upgrade-771300 dockerd[5574]: time="2024-05-20T13:08:21.358571201Z" level=info msg="Starting up"
	May 20 13:09:21 kubernetes-upgrade-771300 dockerd[5574]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 06:09:21.513447    7224 out.go:239] * 
	* 
	W0520 06:09:21.517005    7224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 06:09:21.520477    7224 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-771300 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-20 06:09:21.923147 -0700 PDT m=+10117.591935701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-771300 -n kubernetes-upgrade-771300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-771300 -n kubernetes-upgrade-771300: exit status 2 (13.5869867s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 06:09:22.052556    7688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-771300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-771300 logs -n 25: (2m46.9896325s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl status kubelet --all                       |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl cat kubelet                                |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl status docker --all                        |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl cat docker                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /etc/docker/daemon.json                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo docker                         | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | system info                                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl status cri-docker                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl cat cri-docker                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | cri-dockerd --version                                |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl status containerd                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl cat containerd                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo cat                            | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /etc/containerd/config.toml                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | containerd config dump                               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl status crio --all                          |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo                                | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo find                           | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |                   |         |                     |                     |
	| ssh     | -p cilium-450000 sudo crio                           | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | config                                               |                          |                   |         |                     |                     |
	| delete  | -p cilium-450000                                     | cilium-450000            | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT | 20 May 24 06:07 PDT |
	| start   | -p force-systemd-env-419600                          | force-systemd-env-419600 | minikube1\jenkins | v1.33.1 | 20 May 24 06:07 PDT |                     |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                          |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                          |                   |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 06:07:59
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 06:07:59.307286    3476 out.go:291] Setting OutFile to fd 2008 ...
	I0520 06:07:59.307545    3476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 06:07:59.307545    3476 out.go:304] Setting ErrFile to fd 1960...
	I0520 06:07:59.307545    3476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 06:07:59.339220    3476 out.go:298] Setting JSON to false
	I0520 06:07:59.345311    3476 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10475,"bootTime":1716200003,"procs":213,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 06:07:59.345449    3476 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 06:07:59.351325    3476 out.go:177] * [force-systemd-env-419600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 06:07:59.355312    3476 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 06:07:59.354330    3476 notify.go:220] Checking for updates...
	I0520 06:07:59.358314    3476 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 06:07:59.362312    3476 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 06:07:59.364319    3476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 06:07:59.367316    3476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0520 06:07:56.318356    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:56.318356    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:56.318356    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:59.432180    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:07:59.432348    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:59.440003    7224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 06:07:59.440262    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:59.461616    7224 ssh_runner.go:195] Run: cat /version.json
	I0520 06:07:59.461616    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-771300 ).state
	I0520 06:07:56.326354    3508 machine.go:94] provisionDockerMachine start ...
	I0520 06:07:56.326354    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:07:58.911714    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:07:58.911714    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:07:58.911903    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:07:59.371318    3476 config.go:182] Loaded profile config "ha-291700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:07:59.371318    3476 config.go:182] Loaded profile config "kubernetes-upgrade-771300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:07:59.372318    3476 config.go:182] Loaded profile config "multinode-093300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:07:59.372318    3476 config.go:182] Loaded profile config "pause-325200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:07:59.373319    3476 config.go:182] Loaded profile config "running-upgrade-649400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 06:07:59.373319    3476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 06:08:02.184179    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:02.184179    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:02.184179    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:02.187648    7224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:02.187648    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:02.188254    7224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-771300 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:02.177532    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:02.177638    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:02.184179    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:02.185121    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:02.185121    3508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 06:08:02.372143    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-325200
	
	I0520 06:08:02.372143    3508 buildroot.go:166] provisioning hostname "pause-325200"
	I0520 06:08:02.372143    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:05.162739    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:05.162739    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:05.163652    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:06.017519    3476 out.go:177] * Using the hyperv driver based on user configuration
	I0520 06:08:06.021221    3476 start.go:297] selected driver: hyperv
	I0520 06:08:06.021221    3476 start.go:901] validating driver "hyperv" against <nil>
	I0520 06:08:06.021318    3476 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 06:08:06.080773    3476 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 06:08:06.082637    3476 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 06:08:06.082637    3476 cni.go:84] Creating CNI manager for ""
	I0520 06:08:06.082637    3476 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 06:08:06.082637    3476 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 06:08:06.083250    3476 start.go:340] cluster config:
	{Name:force-systemd-env-419600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-419600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:
1m0s}
	I0520 06:08:06.084080    3476 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 06:08:06.090339    3476 out.go:177] * Starting "force-systemd-env-419600" primary control-plane node in "force-systemd-env-419600" cluster
	I0520 06:08:06.092353    3476 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 06:08:06.093343    3476 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 06:08:06.093343    3476 cache.go:56] Caching tarball of preloaded images
	I0520 06:08:06.093343    3476 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0520 06:08:06.093343    3476 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 06:08:06.093343    3476 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-env-419600\config.json ...
	I0520 06:08:06.094329    3476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-env-419600\config.json: {Name:mkbd783df7dc9375f3c04b212b1143c07bfcb92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 06:08:06.095331    3476 start.go:360] acquireMachinesLock for force-systemd-env-419600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 06:08:05.497526    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:08:05.497690    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:05.497690    7224 sshutil.go:53] new ssh client: &{IP:172.25.246.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-771300\id_rsa Username:docker}
	I0520 06:08:05.531219    7224 main.go:141] libmachine: [stdout =====>] : 172.25.246.1
	
	I0520 06:08:05.531219    7224 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:05.531219    7224 sshutil.go:53] new ssh client: &{IP:172.25.246.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-771300\id_rsa Username:docker}
	I0520 06:08:05.617153    7224 ssh_runner.go:235] Completed: cat /version.json: (6.1555208s)
	W0520 06:08:05.617567    7224 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 06:08:05.639188    7224 ssh_runner.go:195] Run: systemctl --version
	I0520 06:08:07.637072    7224 ssh_runner.go:235] Completed: systemctl --version: (1.9968463s)
	I0520 06:08:07.637072    7224 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (8.1969667s)
	W0520 06:08:07.637072    7224 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0520 06:08:07.637072    7224 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0520 06:08:07.637072    7224 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0520 06:08:07.651984    7224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 06:08:07.660351    7224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 06:08:07.673355    7224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 06:08:07.704359    7224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 06:08:07.735684    7224 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 06:08:07.735684    7224 start.go:494] detecting cgroup driver to use...
	I0520 06:08:07.735684    7224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:08:07.795582    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 06:08:07.833461    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 06:08:07.859414    7224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 06:08:07.873573    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 06:08:07.909376    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:08:07.947228    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 06:08:07.981758    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:08:08.019759    7224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 06:08:08.060329    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 06:08:08.098678    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 06:08:08.137260    7224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 06:08:08.172644    7224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 06:08:08.212344    7224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 06:08:08.247674    7224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:08:08.560332    7224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 06:08:08.600228    7224 start.go:494] detecting cgroup driver to use...
	I0520 06:08:08.615248    7224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 06:08:08.659663    7224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:08:08.704285    7224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 06:08:08.771331    7224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:08:08.812126    7224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 06:08:08.840605    7224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:08:08.891771    7224 ssh_runner.go:195] Run: which cri-dockerd
	I0520 06:08:08.912256    7224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 06:08:08.932933    7224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 06:08:08.988990    7224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 06:08:09.322017    7224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 06:08:09.615305    7224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 06:08:09.615305    7224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 06:08:09.668511    7224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:08:09.957124    7224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 06:08:07.847886    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:07.847962    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:07.852925    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:07.853930    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:07.853930    3508 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-325200 && echo "pause-325200" | sudo tee /etc/hostname
	I0520 06:08:08.062315    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-325200
	
	I0520 06:08:08.062315    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:10.414812    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:10.414812    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:10.414886    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:13.127882    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:13.127882    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:13.134140    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:13.134743    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:13.134743    3508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-325200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-325200/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-325200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 06:08:13.280918    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 06:08:13.281004    3508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0520 06:08:13.281092    3508 buildroot.go:174] setting up certificates
	I0520 06:08:13.281092    3508 provision.go:84] configureAuth start
	I0520 06:08:13.281092    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:15.565974    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:15.565974    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:15.565974    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:18.301595    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:18.301968    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:18.302056    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:20.575279    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:20.575279    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:20.575279    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:23.264354    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:23.264354    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:23.265284    3508 provision.go:143] copyHostCerts
	I0520 06:08:23.265755    3508 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0520 06:08:23.265826    3508 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0520 06:08:23.266348    3508 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0520 06:08:23.267495    3508 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0520 06:08:23.267495    3508 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0520 06:08:23.267495    3508 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0520 06:08:23.269073    3508 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0520 06:08:23.269167    3508 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0520 06:08:23.269420    3508 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0520 06:08:23.270105    3508 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-325200 san=[127.0.0.1 172.25.241.37 localhost minikube pause-325200]
	I0520 06:08:23.445078    3508 provision.go:177] copyRemoteCerts
	I0520 06:08:23.462274    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 06:08:23.462423    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:25.701872    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:25.701872    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:25.702981    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:28.417221    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:28.417309    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:28.417488    3508 sshutil.go:53] new ssh client: &{IP:172.25.241.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-325200\id_rsa Username:docker}
	I0520 06:08:28.531488    3508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0691081s)
	I0520 06:08:28.531589    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 06:08:28.584843    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0520 06:08:28.636062    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 06:08:28.686537    3508 provision.go:87] duration metric: took 15.4054032s to configureAuth
	I0520 06:08:28.686537    3508 buildroot.go:189] setting minikube options for container-runtime
	I0520 06:08:28.687389    3508 config.go:182] Loaded profile config "pause-325200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 06:08:28.687460    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:30.956502    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:30.956502    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:30.956599    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:33.717325    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:33.717325    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:33.723730    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:33.724450    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:33.724450    3508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 06:08:33.863212    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 06:08:33.863212    3508 buildroot.go:70] root file system type: tmpfs
	I0520 06:08:33.863212    3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 06:08:33.863582    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:36.158757    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:36.158757    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:36.158853    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:38.911062    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:38.911062    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:38.917560    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:38.918343    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:38.918343    3508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 06:08:39.095928    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 06:08:39.095928    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:41.355169    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:41.355887    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:41.355887    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:44.097049    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:44.097049    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:44.104797    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:44.105648    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:44.105648    3508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 06:08:44.260283    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 06:08:44.260283    3508 machine.go:97] duration metric: took 47.9338031s to provisionDockerMachine
	I0520 06:08:44.260283    3508 start.go:293] postStartSetup for "pause-325200" (driver="hyperv")
	I0520 06:08:44.260283    3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 06:08:44.275055    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 06:08:44.275055    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:46.548788    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:46.549182    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:46.549182    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:49.241397    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:49.241556    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:49.241819    3508 sshutil.go:53] new ssh client: &{IP:172.25.241.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-325200\id_rsa Username:docker}
	I0520 06:08:49.358641    3508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0834765s)
	I0520 06:08:49.373920    3508 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 06:08:49.382778    3508 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 06:08:49.382778    3508 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0520 06:08:49.382778    3508 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0520 06:08:49.384647    3508 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem -> 41002.pem in /etc/ssl/certs
	I0520 06:08:49.398858    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 06:08:49.419913    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /etc/ssl/certs/41002.pem (1708 bytes)
	I0520 06:08:49.469997    3508 start.go:296] duration metric: took 5.2097006s for postStartSetup
	I0520 06:08:49.470149    3508 fix.go:56] duration metric: took 55.7504686s for fixHost
	I0520 06:08:49.470233    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:51.691086    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:51.691442    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:51.691442    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:54.370956    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:54.371242    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:54.378968    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:54.379697    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:54.379697    3508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 06:08:54.521022    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716210534.525675006
	
	I0520 06:08:54.521022    3508 fix.go:216] guest clock: 1716210534.525675006
	I0520 06:08:54.521022    3508 fix.go:229] Guest: 2024-05-20 06:08:54.525675006 -0700 PDT Remote: 2024-05-20 06:08:49.470233 -0700 PDT m=+229.190397401 (delta=5.055442006s)
	I0520 06:08:54.521546    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:59.622060    3476 start.go:364] duration metric: took 53.5265878s to acquireMachinesLock for "force-systemd-env-419600"
	I0520 06:08:59.622060    3476 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-419600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-419600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 06:08:59.622596    3476 start.go:125] createHost starting for "" (driver="hyperv")
	I0520 06:08:56.756696    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:08:56.756782    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:56.756782    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:08:59.448090    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:08:59.448090    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:08:59.457664    3508 main.go:141] libmachine: Using SSH client type: native
	I0520 06:08:59.457664    3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x119a4a0] 0x119d080 <nil>  [] 0s} 172.25.241.37 22 <nil> <nil>}
	I0520 06:08:59.458208    3508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716210534
	I0520 06:08:59.621493    3508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 20 13:08:54 UTC 2024
	
	I0520 06:08:59.621493    3508 fix.go:236] clock set: Mon May 20 13:08:54 UTC 2024
	 (err=<nil>)
	I0520 06:08:59.621493    3508 start.go:83] releasing machines lock for "pause-325200", held for 1m5.9019665s
	I0520 06:08:59.622060    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:08:59.626720    3476 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 06:08:59.627589    3476 start.go:159] libmachine.API.Create for "force-systemd-env-419600" (driver="hyperv")
	I0520 06:08:59.627766    3476 client.go:168] LocalClient.Create starting
	I0520 06:08:59.628767    3476 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0520 06:08:59.629074    3476 main.go:141] libmachine: Decoding PEM data...
	I0520 06:08:59.629074    3476 main.go:141] libmachine: Parsing certificate...
	I0520 06:08:59.629629    3476 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0520 06:08:59.629786    3476 main.go:141] libmachine: Decoding PEM data...
	I0520 06:08:59.629786    3476 main.go:141] libmachine: Parsing certificate...
	I0520 06:08:59.629786    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0520 06:09:01.730572    3476 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0520 06:09:01.730572    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:01.730819    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0520 06:09:03.632349    3476 main.go:141] libmachine: [stdout =====>] : False
	
	I0520 06:09:03.632349    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:03.632583    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 06:09:01.975569    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:09:01.975652    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:01.975652    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:09:04.782646    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:09:04.782646    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:04.787083    3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 06:09:04.787293    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:09:04.800509    3508 ssh_runner.go:195] Run: cat /version.json
	I0520 06:09:04.800509    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-325200 ).state
	I0520 06:09:05.274180    3476 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 06:09:05.274180    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:05.274359    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 06:09:07.213038    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:09:07.213437    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:07.213437    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:09:07.213607    3508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0520 06:09:07.213607    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:07.213607    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-325200 ).networkadapters[0]).ipaddresses[0]
	I0520 06:09:10.120050    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:09:10.120050    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:10.120994    3508 sshutil.go:53] new ssh client: &{IP:172.25.241.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-325200\id_rsa Username:docker}
	I0520 06:09:10.175753    3508 main.go:141] libmachine: [stdout =====>] : 172.25.241.37
	
	I0520 06:09:10.176238    3508 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:10.176471    3508 sshutil.go:53] new ssh client: &{IP:172.25.241.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-325200\id_rsa Username:docker}
	I0520 06:09:10.213980    3508 ssh_runner.go:235] Completed: cat /version.json: (5.4134558s)
	W0520 06:09:10.213980    3508 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 06:09:10.229128    3508 ssh_runner.go:195] Run: systemctl --version
	I0520 06:09:09.598096    3476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 06:09:09.598096    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:09.600534    3476 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 06:09:10.084015    3476 main.go:141] libmachine: Creating SSH key...
	I0520 06:09:10.352596    3476 main.go:141] libmachine: Creating VM...
	I0520 06:09:10.352596    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0520 06:09:13.645230    3476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0520 06:09:13.645309    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:13.645382    3476 main.go:141] libmachine: Using switch "Default Switch"
	I0520 06:09:13.645554    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0520 06:09:12.240628    3508 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.4534367s)
	W0520 06:09:12.240733    3508 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	I0520 06:09:12.240791    3508 ssh_runner.go:235] Completed: systemctl --version: (2.0114943s)
	W0520 06:09:12.240894    3508 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0520 06:09:12.240929    3508 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0520 06:09:12.262530    3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 06:09:12.271770    3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 06:09:12.286231    3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 06:09:12.307204    3508 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 06:09:12.307204    3508 start.go:494] detecting cgroup driver to use...
	I0520 06:09:12.307204    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:09:12.358314    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 06:09:12.395732    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 06:09:12.421621    3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 06:09:12.435756    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 06:09:12.477340    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:09:12.514013    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 06:09:12.549772    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 06:09:12.584538    3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 06:09:12.621697    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 06:09:12.659003    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 06:09:12.693307    3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 06:09:12.732462    3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 06:09:12.768363    3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 06:09:12.803674    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:09:13.087363    3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 06:09:13.127834    3508 start.go:494] detecting cgroup driver to use...
	I0520 06:09:13.143096    3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 06:09:13.183411    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:09:13.222404    3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 06:09:13.382979    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 06:09:13.425279    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 06:09:13.451413    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 06:09:13.503439    3508 ssh_runner.go:195] Run: which cri-dockerd
	I0520 06:09:13.523449    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 06:09:13.543670    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 06:09:13.592706    3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 06:09:13.877748    3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 06:09:14.158629    3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 06:09:14.158629    3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
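The log reports only the size of the generated /etc/docker/daemon.json (130 bytes), not its contents. A plausible shape for a "cgroupfs" override — the exact contents below are an assumption for illustration, not taken from the log — can be written to a scratch file like this:

```shell
# Hypothetical daemon.json matching the "cgroupfs" cgroup driver noted in the
# log above. The real file's contents are elided in the log; this is a guess
# at its shape, written to a temp file so nothing on the host is touched.
dj=$(mktemp)
cat > "$dj" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
grep -q 'native.cgroupdriver=cgroupfs' "$dj" && echo "cgroupfs driver set"
```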
	I0520 06:09:14.208628    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:09:14.496447    3508 ssh_runner.go:195] Run: sudo systemctl restart docker
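The ssh_runner sequence above amounts to a series of in-place sed edits on /etc/containerd/config.toml followed by a daemon reload. A subset of those edits can be reproduced safely against a scratch copy of the file (the sed expressions mirror the logged commands; the stand-in config contents are an assumption):

```shell
#!/bin/sh
# Replay a subset of the logged containerd config edits against a scratch
# config.toml, so the sed expressions can be inspected without root or a
# real containerd install. Requires GNU sed (for -i -r and \n in replacements).
set -eu
tmp=$(mktemp -d)
cfg="$tmp/config.toml"

# Minimal stand-in for /etc/containerd/config.toml (contents assumed).
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  restrict_oom_score_adj = true
  SystemdCgroup = true
EOF

# Same expressions as the logged ssh_runner commands, pointed at the copy.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$cfg"

n_pause=$(grep -c 'pause:3.9' "$cfg")
n_ports=$(grep -c 'enable_unprivileged_ports = true' "$cfg")
echo "pause:3.9 lines: $n_pause, unprivileged-ports lines: $n_ports"
```

On the real host the harness follows these edits with `systemctl daemon-reload` and `systemctl restart containerd`, as the next log lines show.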
	I0520 06:09:15.541347    3476 main.go:141] libmachine: [stdout =====>] : True
	
	I0520 06:09:15.541347    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:15.541347    3476 main.go:141] libmachine: Creating VHD
	I0520 06:09:15.541347    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-env-419600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0520 06:09:21.404284    7224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4469699s)
	I0520 06:09:21.419140    7224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0520 06:09:21.510998    7224 out.go:177] 
	W0520 06:09:21.513447    7224 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 20 13:00:59 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.037443074Z" level=info msg="Starting up"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.038319876Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:00.039637079Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.080460865Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110587729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110768930Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110866230Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.110885330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113200235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113305435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113571636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113678936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113702736Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.113716136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.114298537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.114962239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.118680547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.118813447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119026047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119120547Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119814849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119853049Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.119869249Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122547655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122663855Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122691455Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122763655Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122786055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.122863655Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123307456Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123604557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123777457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123801757Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123834957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123866058Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123885758Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123903058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123919358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123934958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123948658Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.123973158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124002458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124018458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124051258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124093058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124128358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124144058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124157758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124172358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124187058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124221458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124238458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124252458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124272558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124307558Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124365059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124385659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124400159Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124509659Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124855060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124954260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.124976560Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125051060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125093360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125108260Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125636761Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.125886262Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.126052562Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:00 kubernetes-upgrade-771300 dockerd[671]: time="2024-05-20T13:01:00.126155962Z" level=info msg="containerd successfully booted in 0.049423s"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.094250311Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.243002207Z" level=info msg="Loading containers: start."
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.691460709Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.773211320Z" level=info msg="Loading containers: done."
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.800062926Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.800884720Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:01 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.858529005Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:01 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:01.858735403Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.828686794Z" level=info msg="Processing signal 'terminated'"
	May 20 13:01:29 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830122681Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830492278Z" level=info msg="Daemon shutdown complete"
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830752675Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:01:29 kubernetes-upgrade-771300 dockerd[664]: time="2024-05-20T13:01:29.830763675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:01:30 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.922757229Z" level=info msg="Starting up"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.924099117Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:30.925255106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1140
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.959361892Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986473143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986577442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986621042Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986637041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986661041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986673441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.986904939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987092937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987151337Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987164237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987191236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.987342235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990420507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990514306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990659804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990693504Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990720604Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990737504Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.990748904Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991134000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991247399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991270199Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991286199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991300698Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991348298Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991628495Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991774594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991871793Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991893493Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991907193Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991921193Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.991938293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992008292Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992025792Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992085791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992104491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992116191Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992135791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992151591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992164491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992177490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992190390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992203790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992216290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992229090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992249690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992268890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992281689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992294389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992306689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992325889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992347389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992366589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992378589Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992496987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992573887Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992588687Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992599587Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992678386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992699586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.992714085Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993197881Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993278580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993323180Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:30 kubernetes-upgrade-771300 dockerd[1140]: time="2024-05-20T13:01:30.993360380Z" level=info msg="containerd successfully booted in 0.035865s"
	May 20 13:01:31 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:31.978697315Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.009201834Z" level=info msg="Loading containers: start."
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.370455711Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.458096904Z" level=info msg="Loading containers: done."
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.482420880Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.482579679Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.527803163Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:32 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:32.527893462Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:32 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:45 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.842855467Z" level=info msg="Processing signal 'terminated'"
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.844484152Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846105637Z" level=info msg="Daemon shutdown complete"
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846235336Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:01:45 kubernetes-upgrade-771300 dockerd[1134]: time="2024-05-20T13:01:45.846306935Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:01:46 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.927287891Z" level=info msg="Starting up"
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.928556979Z" level=info msg="containerd not running, starting managed containerd"
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:46.932662041Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1552
	May 20 13:01:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:46.973318967Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.004916376Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005115875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005190374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005209174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005241973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005256673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005627270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005729169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005750369Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005761769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005789968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.005940767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009212237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009311836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009455835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009597533Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009638133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009660533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009672133Z" level=info msg="metadata content store policy set" policy=shared
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009829431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.009994130Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010016629Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010037029Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010100629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010152028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010426026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010636224Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010728123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010747823Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010761523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010774122Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010791222Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010806122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010823922Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010836822Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010849322Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010860422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010898421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010935021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010965721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.010979621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011010720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011035920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011086920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011101519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011117219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011133119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011145219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011158819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011176419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011193919Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011215518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011230618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011242518Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011407717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011453116Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011466316Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011478916Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011738814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011823713Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.011855013Z" level=info msg="NRI interface is disabled by configuration."
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012273509Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012414407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012484007Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:47.012504107Z" level=info msg="containerd successfully booted in 0.042391s"
	May 20 13:01:47 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:47.981876489Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.476586837Z" level=info msg="Loading containers: start."
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.807559492Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.905307593Z" level=info msg="Loading containers: done."
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.931221455Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.931373653Z" level=info msg="Daemon has completed initialization"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.978281022Z" level=info msg="API listen on /var/run/docker.sock"
	May 20 13:01:48 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:01:48.978438320Z" level=info msg="API listen on [::]:2376"
	May 20 13:01:48 kubernetes-upgrade-771300 systemd[1]: Started Docker Application Container Engine.
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.130773795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133253132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133393534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.133691239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178014301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178177603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178210804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.178417807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.230581685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231234395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231354497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.231550800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250397781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250502383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.250544683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.254253139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709704838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709805940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709832240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.709950942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.821455107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.821982715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.822128117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.822516823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.847471995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.847934702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.848103805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.848306708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.892798372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.893151577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.893329380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:01:55 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:01:55.894125192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.774904743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.774982243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.775000443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.775222145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820259418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820446919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820640521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.820917323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882192130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882425332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882512233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:00 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:00.882741135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514251213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514495615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.514689617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.515177520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682587451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682743552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682763852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:01.682947454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010186431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010291745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010307348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:02.010569583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:13.912132740Z" level=info msg="ignoring event" container=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914336490Z" level=info msg="shim disconnected" id=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 namespace=moby
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914712433Z" level=warning msg="cleaning up after shim disconnected" id=0db62b62a27e1f5e74d221105bfa7a56301acd4c524c7698f00cf587e360ce44 namespace=moby
	May 20 13:02:13 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:13.914778040Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216865467Z" level=info msg="shim disconnected" id=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216950077Z" level=warning msg="cleaning up after shim disconnected" id=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.216967979Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:14.218894194Z" level=info msg="ignoring event" container=0cd24e8823c0d99c26bbc8e14091869c70c79e1ef030ff98df807a59b4999d52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.301813261Z" level=warning msg="cleanup warnings time=\"2024-05-20T13:02:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796412739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796820785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.796885692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:14 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:14.797244032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309100786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309417221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309522833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.309832967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607638677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607740188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.607762091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.609687203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.704693802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705133551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705185357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:15.705558298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.101434997Z" level=info msg="shim disconnected" id=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:19.101692603Z" level=info msg="ignoring event" container=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.103346538Z" level=warning msg="cleaning up after shim disconnected" id=f8f04420a3569480426abfeaadb9dee79e896455b9019aea4246bb2d99edd7f1 namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.103515142Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:19.281857359Z" level=info msg="ignoring event" container=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.282839880Z" level=info msg="shim disconnected" id=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.282987083Z" level=warning msg="cleaning up after shim disconnected" id=6a8d4844731ddcfbefa92b758e5efd37f69f4ebef2884ae1e9a14845230a726b namespace=moby
	May 20 13:02:19 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:19.283004584Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:02:31.876849124Z" level=info msg="ignoring event" container=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.876684723Z" level=info msg="shim disconnected" id=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.877874330Z" level=warning msg="cleaning up after shim disconnected" id=140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49 namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.878129431Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:02:31 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:31.905305679Z" level=warning msg="cleanup warnings time=\"2024-05-20T13:02:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348168231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348227132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348239532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:02:46 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:02:46.348380632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.856310828Z" level=info msg="shim disconnected" id=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 namespace=moby
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:04:11.857238831Z" level=info msg="ignoring event" container=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.858242934Z" level=warning msg="cleaning up after shim disconnected" id=cadb4b4bf4727be6ab1cbdf212573de7e3a4bd9347f02e6da234bc8e532359b4 namespace=moby
	May 20 13:04:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:11.858434834Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538235984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538449684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538488584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:04:15 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:04:15.538656385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:06:11.870709878Z" level=info msg="ignoring event" container=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874226381Z" level=info msg="shim disconnected" id=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a namespace=moby
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874612781Z" level=warning msg="cleaning up after shim disconnected" id=49ffb258d29eee1f24a89ae37e25dfa85bb62adbf32870f9513e4bb47580fc2a namespace=moby
	May 20 13:06:11 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:11.874682181Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.145912696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146016996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146467497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:06:12 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:06:12.146923397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:01.931343263Z" level=info msg="ignoring event" container=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.934941035Z" level=info msg="shim disconnected" id=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 namespace=moby
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.935393532Z" level=warning msg="cleaning up after shim disconnected" id=763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718 namespace=moby
	May 20 13:08:01 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:01.935498231Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252177659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252290558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.252734354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:02 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:02.253607748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 13:08:09 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:09.973002592Z" level=info msg="Processing signal 'terminated'"
	May 20 13:08:09 kubernetes-upgrade-771300 systemd[1]: Stopping Docker Application Container Engine...
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.230142085Z" level=info msg="ignoring event" container=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.232967063Z" level=info msg="shim disconnected" id=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.233137461Z" level=warning msg="cleaning up after shim disconnected" id=a61303294090cd66213f69c5e01ec1a568537e45cdcc56f1d7b9f308fa86f6bd namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.233189961Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247023453Z" level=info msg="shim disconnected" id=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247144852Z" level=warning msg="cleaning up after shim disconnected" id=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.247157752Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.247372450Z" level=info msg="ignoring event" container=f7c20c1282eb1a1013064e08559000088b9b81e0a8e34f2d9ddf4ba6cf3c5404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.250825123Z" level=info msg="ignoring event" container=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251279720Z" level=info msg="shim disconnected" id=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251393319Z" level=warning msg="cleaning up after shim disconnected" id=67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.251506218Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.274547738Z" level=info msg="ignoring event" container=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275660229Z" level=info msg="shim disconnected" id=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275766329Z" level=warning msg="cleaning up after shim disconnected" id=354484ce3c3907c57f7c1dd57e705e70f8058a24a1397bfc98e4b6a6ad011db8 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.275945927Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.299742941Z" level=info msg="shim disconnected" id=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.302401521Z" level=info msg="ignoring event" container=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.302824617Z" level=info msg="ignoring event" container=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.319605986Z" level=warning msg="cleaning up after shim disconnected" id=ab4bb9947249b1f6bab0f97239a82f35c07559cf078fda1136bc4addd5ae6cbe namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.327187827Z" level=info msg="ignoring event" container=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.328461417Z" level=info msg="ignoring event" container=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.328775015Z" level=info msg="ignoring event" container=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.331314295Z" level=info msg="ignoring event" container=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.336717753Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.340174826Z" level=info msg="ignoring event" container=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.320113582Z" level=info msg="shim disconnected" id=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.340568023Z" level=warning msg="cleaning up after shim disconnected" id=cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.340992420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:10.350136848Z" level=info msg="ignoring event" container=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325335642Z" level=info msg="shim disconnected" id=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.351143140Z" level=info msg="shim disconnected" id=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.354832711Z" level=warning msg="cleaning up after shim disconnected" id=8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.354848411Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.364943533Z" level=warning msg="cleaning up after shim disconnected" id=2a60a59e9a4eecf51706165f15a5d914b7b28857252560eb8a98fbff7f3e7912 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.367236815Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.331707292Z" level=info msg="shim disconnected" id=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.376494842Z" level=warning msg="cleaning up after shim disconnected" id=74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.376813840Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.331643992Z" level=info msg="shim disconnected" id=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325355442Z" level=info msg="shim disconnected" id=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.325794638Z" level=info msg="shim disconnected" id=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.382439096Z" level=warning msg="cleaning up after shim disconnected" id=642480f208d14add76ecbc07005ec355557659e187fb0efa2a968d7967a846ec namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.382490296Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.387660755Z" level=warning msg="cleaning up after shim disconnected" id=3f08d8fd18cfc145c7d6baac3158f98d812d34d2093d11f381a94aef63e2df7f namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.387825654Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.395710992Z" level=warning msg="cleaning up after shim disconnected" id=934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1 namespace=moby
	May 20 13:08:10 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:10.395811492Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.095287980Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.118381400Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.153403626Z" level=info msg="ignoring event" container=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.156847699Z" level=info msg="shim disconnected" id=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.157645893Z" level=warning msg="cleaning up after shim disconnected" id=941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301 namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.158003790Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.180419115Z" level=info msg="ignoring event" container=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.183090995Z" level=info msg="shim disconnected" id=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.183780289Z" level=warning msg="cleaning up after shim disconnected" id=61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1552]: time="2024-05-20T13:08:20.184228486Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.270849910Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271569504Z" level=info msg="Daemon shutdown complete"
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271724103Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 20 13:08:20 kubernetes-upgrade-771300 dockerd[1546]: time="2024-05-20T13:08:20.271760402Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Deactivated successfully.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Consumed 12.678s CPU time.
	May 20 13:08:21 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:08:21 kubernetes-upgrade-771300 dockerd[5574]: time="2024-05-20T13:08:21.358571201Z" level=info msg="Starting up"
	May 20 13:09:21 kubernetes-upgrade-771300 dockerd[5574]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 13:09:21 kubernetes-upgrade-771300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0520 06:09:21.513447    7224 out.go:239] * 
	W0520 06:09:21.517005    7224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 06:09:21.520477    7224 out.go:177] 
	I0520 06:09:19.529114    3476 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-env-419600\f
	                          ixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 27348A83-3BD3-4D7A-8CC8-BDB093EA92F3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0520 06:09:19.529114    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:19.529114    3476 main.go:141] libmachine: Writing magic tar header
	I0520 06:09:19.529114    3476 main.go:141] libmachine: Writing SSH key tar header
	I0520 06:09:19.540314    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-env-419600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-env-419600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0520 06:09:22.899462    3476 main.go:141] libmachine: [stdout =====>] : 
	I0520 06:09:22.899527    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:22.899527    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-env-419600\disk.vhd' -SizeBytes 20000MB
	I0520 06:09:27.445122    3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9486402s)
	I0520 06:09:27.467984    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 06:09:27.533133    3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 06:09:27.597741    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 06:09:27.642141    3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 06:09:27.903206    3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 06:09:28.141302    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:09:28.390407    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 06:09:28.435466    3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 06:09:28.483001    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:09:28.782521    3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 06:09:28.971250    3508 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 06:09:28.985272    3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 06:09:28.999515    3508 start.go:562] Will wait 60s for crictl version
	I0520 06:09:29.013691    3508 ssh_runner.go:195] Run: which crictl
	I0520 06:09:29.035327    3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 06:09:29.098941    3508 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0520 06:09:29.108070    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 06:09:29.155028    3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 06:09:25.659054    3476 main.go:141] libmachine: [stdout =====>] : 
	I0520 06:09:25.659938    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:25.660115    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-env-419600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-env-419600' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0520 06:09:29.196302    3508 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0520 06:09:29.196302    3508 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0520 06:09:29.201376    3508 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0520 06:09:29.201376    3508 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0520 06:09:29.201376    3508 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0520 06:09:29.201376    3508 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:fa:c9:4a Flags:up|broadcast|multicast|running}
	I0520 06:09:29.204323    3508 ip.go:210] interface addr: fe80::2696:9407:4ec3:83c8/64
	I0520 06:09:29.205309    3508 ip.go:210] interface addr: 172.25.240.1/20
	I0520 06:09:29.219320    3508 ssh_runner.go:195] Run: grep 172.25.240.1	host.minikube.internal$ /etc/hosts
	I0520 06:09:29.229263    3508 kubeadm.go:877] updating cluster {Name:pause-325200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:pause-325200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.241.37 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin
:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 06:09:29.229263    3508 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 06:09:29.240446    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 06:09:29.267656    3508 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 06:09:29.267656    3508 docker.go:615] Images already preloaded, skipping extraction
	I0520 06:09:29.278646    3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 06:09:29.305556    3508 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 06:09:29.305556    3508 cache_images.go:84] Images are preloaded, skipping loading
	I0520 06:09:29.305556    3508 kubeadm.go:928] updating node { 172.25.241.37 8443 v1.30.1 docker true true} ...
	I0520 06:09:29.305556    3508 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-325200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.241.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-325200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 06:09:29.316557    3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 06:09:29.357957    3508 cni.go:84] Creating CNI manager for ""
	I0520 06:09:29.357957    3508 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 06:09:29.358115    3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 06:09:29.358115    3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.241.37 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-325200 NodeName:pause-325200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.241.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.241.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 06:09:29.358115    3508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.241.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-325200"
	  kubeletExtraArgs:
	    node-ip: 172.25.241.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.241.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 06:09:29.373192    3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 06:09:29.392545    3508 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 06:09:29.406353    3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 06:09:29.425299    3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 06:09:29.459904    3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 06:09:29.498086    3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0520 06:09:29.551401    3508 ssh_runner.go:195] Run: grep 172.25.241.37	control-plane.minikube.internal$ /etc/hosts
	I0520 06:09:29.573950    3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 06:09:29.844665    3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 06:09:29.896772    3508 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200 for IP: 172.25.241.37
	I0520 06:09:29.896772    3508 certs.go:194] generating shared ca certs ...
	I0520 06:09:29.896772    3508 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 06:09:29.897769    3508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0520 06:09:29.898780    3508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0520 06:09:29.898780    3508 certs.go:256] generating profile certs ...
	I0520 06:09:29.898780    3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\client.key
	I0520 06:09:29.899770    3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\apiserver.key.73d76a6b
	I0520 06:09:29.899770    3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\proxy-client.key
	I0520 06:09:29.901783    3508 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem (1338 bytes)
	W0520 06:09:29.902778    3508 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100_empty.pem, impossibly tiny 0 bytes
	I0520 06:09:29.902778    3508 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0520 06:09:29.902778    3508 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0520 06:09:29.902778    3508 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0520 06:09:29.902778    3508 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0520 06:09:29.903771    3508 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem (1708 bytes)
	I0520 06:09:29.904772    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 06:09:30.013374    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 06:09:30.086327    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 06:09:30.152821    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 06:09:30.234282    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 06:09:30.330104    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 06:09:29.759643    3476 main.go:141] libmachine: [stdout =====>] : 
	Name                     State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                     ----- ----------- ----------------- ------   ------             -------
	force-systemd-env-419600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0520 06:09:29.759643    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:29.759643    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-env-419600 -DynamicMemoryEnabled $false
	I0520 06:09:32.428639    3476 main.go:141] libmachine: [stdout =====>] : 
	I0520 06:09:32.428639    3476 main.go:141] libmachine: [stderr =====>] : 
	I0520 06:09:32.429735    3476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-env-419600 -Count 2
	I0520 06:09:30.428515    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 06:09:30.523995    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-325200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 06:09:30.597768    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\4100.pem --> /usr/share/ca-certificates/4100.pem (1338 bytes)
	I0520 06:09:30.653489    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\41002.pem --> /usr/share/ca-certificates/41002.pem (1708 bytes)
	I0520 06:09:30.732716    3508 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 06:09:30.810419    3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 06:09:30.885921    3508 ssh_runner.go:195] Run: openssl version
	I0520 06:09:30.920567    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4100.pem && ln -fs /usr/share/ca-certificates/4100.pem /etc/ssl/certs/4100.pem"
	I0520 06:09:30.990696    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4100.pem
	I0520 06:09:30.999722    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:39 /usr/share/ca-certificates/4100.pem
	I0520 06:09:31.013689    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4100.pem
	I0520 06:09:31.043777    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4100.pem /etc/ssl/certs/51391683.0"
	I0520 06:09:31.108052    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41002.pem && ln -fs /usr/share/ca-certificates/41002.pem /etc/ssl/certs/41002.pem"
	I0520 06:09:31.162369    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41002.pem
	I0520 06:09:31.173952    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:39 /usr/share/ca-certificates/41002.pem
	I0520 06:09:31.188525    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41002.pem
	I0520 06:09:31.224836    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41002.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 06:09:31.289519    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 06:09:31.332092    3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 06:09:31.350427    3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0520 06:09:31.365488    3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 06:09:31.392503    3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 06:09:31.428339    3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 06:09:31.450665    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 06:09:31.481213    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 06:09:31.514518    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 06:09:31.546638    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 06:09:31.576929    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 06:09:31.605887    3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 06:09:31.619382    3508 kubeadm.go:391] StartCluster: {Name:pause-325200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:pause-325200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.241.37 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fa
lse olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 06:09:31.630795    3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 06:09:31.694710    3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 06:09:31.731688    3508 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 06:09:31.731752    3508 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 06:09:31.731752    3508 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 06:09:31.746674    3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 06:09:31.773311    3508 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 06:09:31.775890    3508 kubeconfig.go:125] found "pause-325200" server: "https://172.25.241.37:8443"
	I0520 06:09:31.780643    3508 kapi.go:59] client config for pause-325200: &rest.Config{Host:"https://172.25.241.37:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-325200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-325200\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x263a180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 06:09:31.795024    3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 06:09:31.815030    3508 kubeadm.go:624] The running cluster does not require reconfiguration: 172.25.241.37
	I0520 06:09:31.815030    3508 kubeadm.go:1154] stopping kube-system containers ...
	I0520 06:09:31.826060    3508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 06:09:31.888117    3508 docker.go:483] Stopping containers: [4190bdfa88bd 7b288357c086 eda21b514df5 ff2347d2299b 57aa02e60a13 f53c2c8116ff efb710984277 eacd35c71ac4 b5f407b45d41 a48426826983 c901c34a1b14 5d0f46d4db39 b3ffa5cd23a7 12a8cfc066e2 e1b67231c4db 869642fc049e 37d7ef79087f 2850432270bd 03dd1ed77bfd 233268824d56 f42617ef61cb 641cf43593a4 9e6654efa655 5c05e0025f45 b9209cfee7d0 391012c71556 5efd5fb4b5a2 69eadabece5a 2380abe3591a]
	I0520 06:09:31.902953    3508 ssh_runner.go:195] Run: docker stop 4190bdfa88bd 7b288357c086 eda21b514df5 ff2347d2299b 57aa02e60a13 f53c2c8116ff efb710984277 eacd35c71ac4 b5f407b45d41 a48426826983 c901c34a1b14 5d0f46d4db39 b3ffa5cd23a7 12a8cfc066e2 e1b67231c4db 869642fc049e 37d7ef79087f 2850432270bd 03dd1ed77bfd 233268824d56 f42617ef61cb 641cf43593a4 9e6654efa655 5c05e0025f45 b9209cfee7d0 391012c71556 5efd5fb4b5a2 69eadabece5a 2380abe3591a
	I0520 06:09:34.174734    3508 ssh_runner.go:235] Completed: docker stop 4190bdfa88bd 7b288357c086 eda21b514df5 ff2347d2299b 57aa02e60a13 f53c2c8116ff efb710984277 eacd35c71ac4 b5f407b45d41 a48426826983 c901c34a1b14 5d0f46d4db39 b3ffa5cd23a7 12a8cfc066e2 e1b67231c4db 869642fc049e 37d7ef79087f 2850432270bd 03dd1ed77bfd 233268824d56 f42617ef61cb 641cf43593a4 9e6654efa655 5c05e0025f45 b9209cfee7d0 391012c71556 5efd5fb4b5a2 69eadabece5a 2380abe3591a: (2.271775s)
	I0520 06:09:34.190731    3508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 06:09:34.269470    3508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 06:09:34.294229    3508 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 May 20 13:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 20 13:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 20 13:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 May 20 13:04 /etc/kubernetes/scheduler.conf
	
	I0520 06:09:34.309496    3508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 06:09:34.350414    3508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 06:09:34.388046    3508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 06:09:34.408720    3508 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 06:09:34.421346    3508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 06:09:34.460367    3508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 06:09:34.488059    3508 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 06:09:34.503537    3508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 06:09:34.543989    3508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 06:09:34.577094    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 06:09:34.714544    3508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	
	
	==> Docker <==
	May 20 13:11:21 kubernetes-upgrade-771300 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 20 13:11:21 kubernetes-upgrade-771300 systemd[1]: Failed to start Docker Application Container Engine.
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '140b6170517849028a2a58b3df03bc22687e54a5f42f998c5f872ac4a8bbde49'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '934dcd0c933344ecdd4733609bc755ddad9563f20f208f0e6a0d4b4b5b5fceb1'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '67b23ab502c3bca05d926246e18f484ef934a75166c39178e2cb2cf621f22d78'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '941f70f2d858c3e004afdc934ecb45dbce2804cda5a9e01cc0943f3b856ba301'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID 'cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cda0c38d2a9425c6aefbeede2543a6fd9ad88d6b1c7ac28e128d28374e959311'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '74fa8e3295c178853e650cb395ad5a21e5e7012d38e8a4008fe7d15f2a70fe4b'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8fd7ac69a11ed42f3a5f7e1d41d0372c6864581219927ce93ddd1a618bb012d5'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '763b8d0f762af80e68a0ac0a0bff3659c0db7b368596cfa2142ea585748f7718'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error getting RW layer size for container ID '61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="Set backoffDuration to : 1m0s for container ID '61fa58aa7a58da681e9d9b283458267ffe17a778dfb3027edefdbc895c66973e'"
	May 20 13:11:21 kubernetes-upgrade-771300 cri-dockerd[1356]: time="2024-05-20T13:11:21Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	May 20 13:11:22 kubernetes-upgrade-771300 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	May 20 13:11:22 kubernetes-upgrade-771300 systemd[1]: Stopped Docker Application Container Engine.
	May 20 13:11:22 kubernetes-upgrade-771300 systemd[1]: Starting Docker Application Container Engine...
	May 20 13:11:22 kubernetes-upgrade-771300 dockerd[6276]: time="2024-05-20T13:11:22.193126962Z" level=info msg="Starting up"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T13:11:24Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.075868] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[May20 13:01] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.129538] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.626248] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +0.231146] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +0.240486] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.035960] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.227172] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.216731] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.315040] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +0.111595] kauditd_printk_skb: 183 callbacks suppressed
	[ +12.105264] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +0.117885] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.571632] systemd-fstab-generator[1770]: Ignoring "noauto" option for root device
	[  +4.337267] systemd-fstab-generator[1909]: Ignoring "noauto" option for root device
	[  +0.126294] kauditd_printk_skb: 73 callbacks suppressed
	[May20 13:02] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.809683] systemd-fstab-generator[2741]: Ignoring "noauto" option for root device
	[ +11.161529] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.204792] kauditd_printk_skb: 38 callbacks suppressed
	[May20 13:08] systemd-fstab-generator[5086]: Ignoring "noauto" option for root device
	[  +0.771627] systemd-fstab-generator[5123]: Ignoring "noauto" option for root device
	[  +0.318915] systemd-fstab-generator[5135]: Ignoring "noauto" option for root device
	[  +0.332697] systemd-fstab-generator[5149]: Ignoring "noauto" option for root device
	[ +10.417177] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 13:12:22 up 12 min,  0 users,  load average: 0.05, 0.16, 0.13
	Linux kubernetes-upgrade-771300 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 20 13:12:13 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:13.143035    1937 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-771300\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-771300?timeout=10s\": dial tcp 172.25.246.1:8443: connect: connection refused"
	May 20 13:12:13 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:13.143153    1937 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 20 13:12:14 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:14.116802    1937 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-upgrade-771300.17d13459276e9dc2\": dial tcp 172.25.246.1:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-771300.17d13459276e9dc2  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-771300,UID:255ed9a290ef80d374542874262c3913,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.25.246.1:8443/readyz\": dial tcp 172.25.246.1:8443: connect: connection refused,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-771300,},FirstTimestamp:2024-05-20 13:08:10.523164098 +0
000 UTC m=+376.667489019,LastTimestamp:2024-05-20 13:08:12.522998587 +0000 UTC m=+378.667323508,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-771300,}"
	May 20 13:12:14 kubernetes-upgrade-771300 kubelet[1937]: I0520 13:12:14.156034    1937 status_manager.go:853] "Failed to get status for pod" podUID="255ed9a290ef80d374542874262c3913" pod="kube-system/kube-apiserver-kubernetes-upgrade-771300" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-771300\": dial tcp 172.25.246.1:8443: connect: connection refused"
	May 20 13:12:16 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:16.419669    1937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-771300?timeout=10s\": dial tcp 172.25.246.1:8443: connect: connection refused" interval="7s"
	May 20 13:12:17 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:17.747821    1937 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.472796643s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.220636    1937 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.222530    1937 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.223152    1937 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.223976    1937 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.230438    1937 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.230529    1937 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: I0520 13:12:22.230722    1937 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.231469    1937 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.231726    1937 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.232388    1937 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.233966    1937 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.234318    1937 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.235243    1937 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.233537    1937 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.235550    1937 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: I0520 13:12:22.235786    1937 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.237677    1937 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.237818    1937 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	May 20 13:12:22 kubernetes-upgrade-771300 kubelet[1937]: E0520 13:12:22.238175    1937 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0520 06:09:35.626847    4124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0520 06:10:21.671301    4124 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:10:21.715333    4124 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:10:21.747345    4124 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:10:21.787955    4124 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:10:21.822041    4124 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:11:21.988736    4124 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:11:22.023778    4124 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0520 06:11:22.057635    4124 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-771300 -n kubernetes-upgrade-771300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-771300 -n kubernetes-upgrade-771300: exit status 2 (15.5185429s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0520 06:12:23.140583    4684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-771300" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-771300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-771300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-771300: (1m9.1323022s)
--- FAIL: TestKubernetesUpgrade (1585.93s)

TestNoKubernetes/serial/StartWithK8s (299.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-509600 --driver=hyperv
E0520 05:48:04.579103    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 05:50:25.062174    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-509600 --driver=hyperv: exit status 1 (4m59.7529243s)

-- stdout --
	* [NoKubernetes-509600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-509600" primary control-plane node in "NoKubernetes-509600" cluster

-- /stdout --
** stderr ** 
	W0520 05:47:22.155650    6496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-509600 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-509600 -n NoKubernetes-509600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-509600 -n NoKubernetes-509600: exit status 7 (174.1883ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0520 05:52:21.878632   14076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-509600" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.93s)

TestPause/serial/VerifyDeletedResources (8.38s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Non-zero exit: out/minikube-windows-amd64.exe profile list --output json: exit status 1 (8.0080435s)

** stderr ** 
	W0520 06:11:32.909327   14340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
pause_test.go:144: failed to list profiles with json format after it was deleted. args "out/minikube-windows-amd64.exe profile list --output json": exit status 1
pause_test.go:149: failed to decode json from profile list: args "out/minikube-windows-amd64.exe profile list --output json": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-325200 -n pause-325200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-325200 -n pause-325200: exit status 85 (181.7631ms)

-- stdout --
	* Profile "pause-325200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-325200"

-- /stdout --
** stderr ** 
	W0520 06:11:40.934008    1356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-325200" host is not running, skipping log retrieval (state="* Profile \"pause-325200\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-325200\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-325200 -n pause-325200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-325200 -n pause-325200: exit status 85 (191.762ms)

-- stdout --
	* Profile "pause-325200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-325200"

-- /stdout --
** stderr ** 
	W0520 06:11:41.132990    1176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-325200" host is not running, skipping log retrieval (state="* Profile \"pause-325200\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-325200\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (8.38s)


Test pass (152/205)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.31
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.25
9 TestDownloadOnly/v1.20.0/DeleteAll 1.38
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.28
12 TestDownloadOnly/v1.30.1/json-events 13.41
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.27
18 TestDownloadOnly/v1.30.1/DeleteAll 1.72
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.77
21 TestBinaryMirror 7.64
22 TestOffline 300.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 392.81
30 TestAddons/parallel/Ingress 68.25
31 TestAddons/parallel/InspektorGadget 27.4
32 TestAddons/parallel/MetricsServer 21.93
33 TestAddons/parallel/HelmTiller 30.58
35 TestAddons/parallel/CSI 96.55
36 TestAddons/parallel/Headlamp 37.86
37 TestAddons/parallel/CloudSpanner 20.77
38 TestAddons/parallel/LocalPath 97.13
39 TestAddons/parallel/NvidiaDevicePlugin 22.08
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.34
44 TestAddons/StoppedEnableDisable 55.51
47 TestDockerFlags 406.01
48 TestForceSystemdFlag 559.07
49 TestForceSystemdEnv 345.89
56 TestErrorSpam/start 17.75
57 TestErrorSpam/status 38.65
58 TestErrorSpam/pause 23.89
59 TestErrorSpam/unpause 24.07
60 TestErrorSpam/stop 57.22
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 248.54
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 130.07
67 TestFunctional/serial/KubeContext 0.14
68 TestFunctional/serial/KubectlGetPods 0.24
71 TestFunctional/serial/CacheCmd/cache/add_remote 27.07
72 TestFunctional/serial/CacheCmd/cache/add_local 11.26
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
74 TestFunctional/serial/CacheCmd/cache/list 0.19
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.81
76 TestFunctional/serial/CacheCmd/cache/cache_reload 37.46
77 TestFunctional/serial/CacheCmd/cache/delete 0.37
78 TestFunctional/serial/MinikubeKubectlCmd 0.45
80 TestFunctional/serial/ExtraConfig 131.75
81 TestFunctional/serial/ComponentHealth 0.19
82 TestFunctional/serial/LogsCmd 8.76
83 TestFunctional/serial/LogsFileCmd 10.97
84 TestFunctional/serial/InvalidService 21.47
90 TestFunctional/parallel/StatusCmd 43.34
94 TestFunctional/parallel/ServiceCmdConnect 38.85
95 TestFunctional/parallel/AddonsCmd 0.62
96 TestFunctional/parallel/PersistentVolumeClaim 46.41
98 TestFunctional/parallel/SSHCmd 20.45
99 TestFunctional/parallel/CpCmd 57.57
100 TestFunctional/parallel/MySQL 65.59
101 TestFunctional/parallel/FileSync 11.68
102 TestFunctional/parallel/CertSync 63.37
106 TestFunctional/parallel/NodeLabels 0.18
108 TestFunctional/parallel/NonActiveRuntimeDisabled 10.57
110 TestFunctional/parallel/License 3.42
111 TestFunctional/parallel/ServiceCmd/DeployApp 18.55
112 TestFunctional/parallel/ProfileCmd/profile_not_create 10.96
113 TestFunctional/parallel/ProfileCmd/profile_list 10.69
114 TestFunctional/parallel/ServiceCmd/List 13.65
115 TestFunctional/parallel/ProfileCmd/profile_json_output 11.13
116 TestFunctional/parallel/ServiceCmd/JSONOutput 14.97
118 TestFunctional/parallel/DockerEnv/powershell 45.28
120 TestFunctional/parallel/UpdateContextCmd/no_changes 2.79
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.87
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.94
124 TestFunctional/parallel/Version/short 0.2
125 TestFunctional/parallel/Version/components 8.33
126 TestFunctional/parallel/ImageCommands/ImageListShort 8.45
127 TestFunctional/parallel/ImageCommands/ImageListTable 8.23
128 TestFunctional/parallel/ImageCommands/ImageListJson 8.34
129 TestFunctional/parallel/ImageCommands/ImageListYaml 8.41
130 TestFunctional/parallel/ImageCommands/ImageBuild 28.67
131 TestFunctional/parallel/ImageCommands/Setup 5.19
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.23
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 25.56
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.56
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.36
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.79
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.94
147 TestFunctional/parallel/ImageCommands/ImageRemove 15.06
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.09
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.56
150 TestFunctional/delete_addon-resizer_images 0.44
151 TestFunctional/delete_my-image_image 0.2
152 TestFunctional/delete_minikube_cached_images 0.18
156 TestMultiControlPlane/serial/StartCluster 729.13
157 TestMultiControlPlane/serial/DeployApp 15.45
159 TestMultiControlPlane/serial/AddWorkerNode 266.42
160 TestMultiControlPlane/serial/NodeLabels 0.2
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 30.3
162 TestMultiControlPlane/serial/CopyFile 669.67
166 TestImageBuild/serial/Setup 204.36
167 TestImageBuild/serial/NormalBuild 9.88
168 TestImageBuild/serial/BuildWithBuildArg 9.36
169 TestImageBuild/serial/BuildWithDockerIgnore 8.21
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.94
174 TestJSONOutput/start/Command 218.16
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 8.24
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 8.03
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 35.62
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.37
202 TestMainNoArgs 0.23
203 TestMinikubeProfile 542.87
206 TestMountStart/serial/StartWithMountFirst 162.99
207 TestMountStart/serial/VerifyMountFirst 10.06
208 TestMountStart/serial/StartWithMountSecond 163
209 TestMountStart/serial/VerifyMountSecond 9.95
210 TestMountStart/serial/DeleteFirst 32.15
211 TestMountStart/serial/VerifyMountPostDelete 9.85
212 TestMountStart/serial/Stop 27.74
213 TestMountStart/serial/RestartStopped 124.12
214 TestMountStart/serial/VerifyMountPostStop 9.94
221 TestMultiNode/serial/MultiNodeLabels 0.18
222 TestMultiNode/serial/ProfileList 10.34
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.27
242 TestStoppedBinaryUpgrade/Setup 0.71
243 TestStoppedBinaryUpgrade/Upgrade 860.43
252 TestPause/serial/Start 499.46
253 TestPause/serial/SecondStartNoReconfiguration 300.84
254 TestStoppedBinaryUpgrade/MinikubeLogs 10.89
266 TestPause/serial/Pause 10.03
267 TestPause/serial/VerifyStatus 17.59
268 TestPause/serial/Unpause 8.7
269 TestPause/serial/PauseAgain 8.44
270 TestPause/serial/DeletePaused 46.95
TestDownloadOnly/v1.20.0/json-events (16.31s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-552800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-552800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.3083007s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.31s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-552800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-552800: exit status 85 (247.6985ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-552800 | minikube1\jenkins | v1.33.1 | 20 May 24 03:20 PDT |          |
	|         | -p download-only-552800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:20:44
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:20:44.489624   10980 out.go:291] Setting OutFile to fd 540 ...
	I0520 03:20:44.490202   10980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:20:44.490202   10980 out.go:304] Setting ErrFile to fd 564...
	I0520 03:20:44.490202   10980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 03:20:44.505393   10980 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0520 03:20:44.516242   10980 out.go:298] Setting JSON to true
	I0520 03:20:44.521243   10980 start.go:129] hostinfo: {"hostname":"minikube1","uptime":441,"bootTime":1716200003,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:20:44.521243   10980 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:20:44.529522   10980 out.go:97] [download-only-552800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:20:44.530034   10980 notify.go:220] Checking for updates...
	I0520 03:20:44.532566   10980 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	W0520 03:20:44.530034   10980 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0520 03:20:44.538148   10980 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:20:44.540845   10980 out.go:169] MINIKUBE_LOCATION=18925
	I0520 03:20:44.543918   10980 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0520 03:20:44.548880   10980 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 03:20:44.549840   10980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:20:50.275783   10980 out.go:97] Using the hyperv driver based on user configuration
	I0520 03:20:50.275783   10980 start.go:297] selected driver: hyperv
	I0520 03:20:50.275783   10980 start.go:901] validating driver "hyperv" against <nil>
	I0520 03:20:50.275783   10980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:20:50.336219   10980 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0520 03:20:50.337139   10980 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:20:50.337677   10980 cni.go:84] Creating CNI manager for ""
	I0520 03:20:50.337956   10980 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 03:20:50.337956   10980 start.go:340] cluster config:
	{Name:download-only-552800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-552800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:20:50.340515   10980 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:20:50.343480   10980 out.go:97] Downloading VM boot image ...
	I0520 03:20:50.343480   10980 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 03:20:53.785952   10980 out.go:97] Starting "download-only-552800" primary control-plane node in "download-only-552800" cluster
	I0520 03:20:53.786624   10980 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:20:53.825602   10980 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0520 03:20:53.825674   10980 cache.go:56] Caching tarball of preloaded images
	I0520 03:20:53.826404   10980 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:20:53.829470   10980 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 03:20:53.829470   10980 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0520 03:20:53.918833   10980 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0520 03:20:57.194427   10980 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0520 03:20:57.195457   10980 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0520 03:20:58.303516   10980 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 03:20:58.304489   10980 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-552800\config.json ...
	I0520 03:20:58.305351   10980 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-552800\config.json: {Name:mk0dc027c60a92598e4c240bee4cebb63c961a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:20:58.306128   10980 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:20:58.308148   10980 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-552800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-552800"

-- /stdout --
** stderr ** 
	W0520 03:21:00.834800    8316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

TestDownloadOnly/v1.20.0/DeleteAll (1.38s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3838172s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.38s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.28s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-552800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-552800: (1.2773789s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.28s)

TestDownloadOnly/v1.30.1/json-events (13.41s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-847500 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-847500 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (13.4062692s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.41s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.27s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-847500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-847500: exit status 85 (273.1465ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-552800 | minikube1\jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | -p download-only-552800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| delete  | -p download-only-552800        | download-only-552800 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT | 20 May 24 03:21 PDT |
	| start   | -o=json --download-only        | download-only-847500 | minikube1\jenkins | v1.33.1 | 20 May 24 03:21 PDT |                     |
	|         | -p download-only-847500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:21:03
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:21:03.720437    4584 out.go:291] Setting OutFile to fd 584 ...
	I0520 03:21:03.721123    4584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:03.721123    4584 out.go:304] Setting ErrFile to fd 544...
	I0520 03:21:03.721123    4584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:03.746936    4584 out.go:298] Setting JSON to true
	I0520 03:21:03.750886    4584 start.go:129] hostinfo: {"hostname":"minikube1","uptime":460,"bootTime":1716200003,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:21:03.751905    4584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:21:03.756720    4584 out.go:97] [download-only-847500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:21:03.757188    4584 notify.go:220] Checking for updates...
	I0520 03:21:03.760239    4584 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:21:03.762831    4584 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:21:03.768024    4584 out.go:169] MINIKUBE_LOCATION=18925
	I0520 03:21:03.770274    4584 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0520 03:21:03.775113    4584 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 03:21:03.775590    4584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:21:10.287478    4584 out.go:97] Using the hyperv driver based on user configuration
	I0520 03:21:10.287478    4584 start.go:297] selected driver: hyperv
	I0520 03:21:10.287478    4584 start.go:901] validating driver "hyperv" against <nil>
	I0520 03:21:10.288392    4584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:21:10.355891    4584 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0520 03:21:10.356904    4584 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:21:10.356904    4584 cni.go:84] Creating CNI manager for ""
	I0520 03:21:10.356904    4584 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:21:10.357894    4584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:21:10.357894    4584 start.go:340] cluster config:
	{Name:download-only-847500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-847500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:21:10.357894    4584 iso.go:125] acquiring lock: {Name:mk21c0043f839e55532eb150801eba8a2692e3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:21:10.361893    4584 out.go:97] Starting "download-only-847500" primary control-plane node in "download-only-847500" cluster
	I0520 03:21:10.361893    4584 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:21:10.401087    4584 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0520 03:21:10.401195    4584 cache.go:56] Caching tarball of preloaded images
	I0520 03:21:10.401619    4584 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:21:10.404493    4584 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 03:21:10.404493    4584 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0520 03:21:10.484432    4584 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-847500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-847500"

-- /stdout --
** stderr ** 
	W0520 03:21:17.185694   11892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.27s)

TestDownloadOnly/v1.30.1/DeleteAll (1.72s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.7221731s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.72s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.77s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-847500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-847500: (1.7673701s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.77s)

TestBinaryMirror (7.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-061100 --alsologtostderr --binary-mirror http://127.0.0.1:60375 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-061100 --alsologtostderr --binary-mirror http://127.0.0.1:60375 --driver=hyperv: (6.7787172s)
helpers_test.go:175: Cleaning up "binary-mirror-061100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-061100
--- PASS: TestBinaryMirror (7.64s)

TestOffline (300.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-981500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-981500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m13.97332s)
helpers_test.go:175: Cleaning up "offline-docker-981500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-981500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-981500: (47.0166825s)
--- PASS: TestOffline (300.99s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-363100
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-363100: exit status 85 (202.0039ms)

-- stdout --
	* Profile "addons-363100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-363100"

-- /stdout --
** stderr ** 
	W0520 03:21:31.644341    6128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-363100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-363100: exit status 85 (212.0648ms)

-- stdout --
	* Profile "addons-363100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-363100"

-- /stdout --
** stderr ** 
	W0520 03:21:31.648004    7664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (392.81s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-363100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-363100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m32.8134092s)
--- PASS: TestAddons/Setup (392.81s)

TestAddons/parallel/Ingress (68.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-363100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-363100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-363100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5cc02160-48b8-4a90-bb10-d420141dc70f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5cc02160-48b8-4a90-bb10-d420141dc70f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0074068s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.3922378s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-363100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0520 03:29:46.812177   12076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-363100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 ip: (2.68891s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.25.240.77
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable ingress-dns --alsologtostderr -v=1: (16.9032288s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable ingress --alsologtostderr -v=1: (22.1856781s)
--- PASS: TestAddons/parallel/Ingress (68.25s)

TestAddons/parallel/InspektorGadget (27.4s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cvvhl" [974e73bb-2679-4b4b-ba1b-5b3a6518ee12] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0213975s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-363100
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-363100: (22.3727366s)
--- PASS: TestAddons/parallel/InspektorGadget (27.40s)

TestAddons/parallel/MetricsServer (21.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 23.9917ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-g6npj" [3ec439f0-a4e2-4503-97f6-11c20480f520] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0237131s
addons_test.go:415: (dbg) Run:  kubectl --context addons-363100 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable metrics-server --alsologtostderr -v=1: (16.6245494s)
--- PASS: TestAddons/parallel/MetricsServer (21.93s)

TestAddons/parallel/HelmTiller (30.58s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 6.5087ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-f4zcs" [8f839153-72eb-4331-b147-6db46c4d13ee] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0085245s
addons_test.go:473: (dbg) Run:  kubectl --context addons-363100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-363100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.2536518s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable helm-tiller --alsologtostderr -v=1: (17.2845282s)
--- PASS: TestAddons/parallel/HelmTiller (30.58s)

TestAddons/parallel/CSI (96.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 37.5915ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-363100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-363100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [efb2df64-6a6d-48ac-a555-a8a16f2ce23a] Pending
helpers_test.go:344: "task-pv-pod" [efb2df64-6a6d-48ac-a555-a8a16f2ce23a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [efb2df64-6a6d-48ac-a555-a8a16f2ce23a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.0203924s
addons_test.go:584: (dbg) Run:  kubectl --context addons-363100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-363100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-363100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-363100 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-363100 delete pod task-pv-pod: (2.0459077s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-363100 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-363100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-363100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [217d6448-31a8-4b82-83f0-eedefc0e9126] Pending
helpers_test.go:344: "task-pv-pod-restore" [217d6448-31a8-4b82-83f0-eedefc0e9126] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [217d6448-31a8-4b82-83f0-eedefc0e9126] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0204987s
addons_test.go:626: (dbg) Run:  kubectl --context addons-363100 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-363100 delete pod task-pv-pod-restore: (1.1950218s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-363100 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-363100 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.7299126s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable volumesnapshots --alsologtostderr -v=1: (16.0643819s)
--- PASS: TestAddons/parallel/CSI (96.55s)

TestAddons/parallel/Headlamp (37.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-363100 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-363100 --alsologtostderr -v=1: (16.8398776s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-ttnld" [e062129e-fb67-4541-85be-b7d52e92faa6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-ttnld" [e062129e-fb67-4541-85be-b7d52e92faa6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.019465s
--- PASS: TestAddons/parallel/Headlamp (37.86s)

TestAddons/parallel/CloudSpanner (20.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-62drz" [83b7a5f5-1a50-4393-87ed-5c7bfe61c908] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0153619s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-363100
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-363100: (15.7477108s)
--- PASS: TestAddons/parallel/CloudSpanner (20.77s)

TestAddons/parallel/LocalPath (97.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-363100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-363100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b2269209-d27c-4eef-a896-39f0d98d1a24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b2269209-d27c-4eef-a896-39f0d98d1a24] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b2269209-d27c-4eef-a896-39f0d98d1a24] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 14.0088234s
addons_test.go:891: (dbg) Run:  kubectl --context addons-363100 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 ssh "cat /opt/local-path-provisioner/pvc-94e3d7fc-7011-49fb-8aa0-2b4343d236b6_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 ssh "cat /opt/local-path-provisioner/pvc-94e3d7fc-7011-49fb-8aa0-2b4343d236b6_default_test-pvc/file1": (10.8278779s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-363100 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-363100 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-363100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-363100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.555732s)
--- PASS: TestAddons/parallel/LocalPath (97.13s)

TestAddons/parallel/NvidiaDevicePlugin (22.08s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bvjl2" [bec45794-951c-4586-b0d5-933b3290df13] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0200019s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-363100
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-363100: (17.0532758s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.08s)

TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-t959q" [59c3328f-8e54-4485-b8e7-591911856f14] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0100597s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-363100 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-363100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/StoppedEnableDisable (55.51s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-363100
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-363100: (42.1166568s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-363100
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-363100: (5.4996518s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-363100
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-363100: (4.9940007s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-363100
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-363100: (2.902191s)
--- PASS: TestAddons/StoppedEnableDisable (55.51s)

TestDockerFlags (406.01s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-752500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-752500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m38.6126348s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-752500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-752500 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.6629465s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-752500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-752500 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.6636009s)
helpers_test.go:175: Cleaning up "docker-flags-752500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-752500
E0520 06:18:04.584065    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-752500: (46.0699328s)
--- PASS: TestDockerFlags (406.01s)

TestForceSystemdFlag (559.07s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-771300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-771300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (8m28.4611421s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-771300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-771300 ssh "docker info --format {{.CgroupDriver}}": (10.4475145s)
helpers_test.go:175: Cleaning up "force-systemd-flag-771300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-771300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-771300: (40.1613186s)
--- PASS: TestForceSystemdFlag (559.07s)

TestForceSystemdEnv (345.89s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-419600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0520 06:08:04.582353    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-419600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (4m37.5318496s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-419600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-419600 ssh "docker info --format {{.CgroupDriver}}": (11.1709593s)
helpers_test.go:175: Cleaning up "force-systemd-env-419600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-419600
E0520 06:13:04.578098    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 06:13:28.320458    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-419600: (57.1892678s)
--- PASS: TestForceSystemdEnv (345.89s)

TestErrorSpam/start (17.75s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 start --dry-run: (5.9346061s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 start --dry-run: (5.8664345s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 start --dry-run: (5.9434027s)
--- PASS: TestErrorSpam/start (17.75s)

TestErrorSpam/status (38.65s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 status: (13.1878546s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 status: (12.7304852s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 status: (12.7326797s)
--- PASS: TestErrorSpam/status (38.65s)

TestErrorSpam/pause (23.89s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 pause: (8.2323715s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 pause: (7.7423771s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 pause: (7.9149887s)
--- PASS: TestErrorSpam/pause (23.89s)

TestErrorSpam/unpause (24.07s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 unpause: (8.1453029s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 unpause: (8.0290356s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 unpause: (7.8931025s)
--- PASS: TestErrorSpam/unpause (24.07s)

TestErrorSpam/stop (57.22s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 stop
E0520 03:38:04.550221    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 stop: (34.6159356s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 stop
E0520 03:38:32.387023    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 stop: (11.4971412s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-644700 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-644700 stop: (11.1046574s)
--- PASS: TestErrorSpam/stop (57.22s)

TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\4100\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (248.54s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-379700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0520 03:43:04.563581    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-379700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m8.5247651s)
--- PASS: TestFunctional/serial/StartWithProxy (248.54s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (130.07s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-379700 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-379700 --alsologtostderr -v=8: (2m10.0666889s)
functional_test.go:659: soft start took 2m10.0683806s for "functional-379700" cluster.
--- PASS: TestFunctional/serial/SoftStart (130.07s)

TestFunctional/serial/KubeContext (0.14s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.24s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-379700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.24s)

TestFunctional/serial/CacheCmd/cache/add_remote (27.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cache add registry.k8s.io/pause:3.1: (9.1322633s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cache add registry.k8s.io/pause:3.3: (8.930758s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cache add registry.k8s.io/pause:latest: (9.0097466s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.07s)

TestFunctional/serial/CacheCmd/cache/add_local (11.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-379700 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3175447427\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-379700 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3175447427\001: (2.2329217s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cache add minikube-local-cache-test:functional-379700
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cache add minikube-local-cache-test:functional-379700: (8.6226646s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cache delete minikube-local-cache-test:functional-379700
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-379700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.81s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh sudo crictl images: (9.8049643s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.81s)

TestFunctional/serial/CacheCmd/cache/cache_reload (37.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.7670969s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.7147986s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0520 03:46:28.249060    7580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cache reload: (8.3703856s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.6072588s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.46s)

TestFunctional/serial/CacheCmd/cache/delete (0.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.37s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 kubectl -- --context functional-379700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

TestFunctional/serial/ExtraConfig (131.75s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-379700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0520 03:48:04.554351    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 03:49:27.758958    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-379700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m11.7485618s)
functional_test.go:757: restart took 2m11.7492284s for "functional-379700" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (131.75s)

TestFunctional/serial/ComponentHealth (0.19s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-379700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)
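The ComponentHealth check above runs `kubectl get po -l tier=control-plane -o=json` and asserts every control-plane pod reports phase `Running` and status `Ready`. A minimal sketch of that verification in Python, fed a hypothetical inline pod list rather than live `kubectl` output:

```python
import json

def control_plane_healthy(pod_list_json: str) -> bool:
    """Return True when every pod in the list is Running and Ready."""
    pods = json.loads(pod_list_json)["items"]
    for pod in pods:
        if pod["status"]["phase"] != "Running":
            return False
        ready = [c for c in pod["status"].get("conditions", [])
                 if c["type"] == "Ready"]
        if not ready or ready[0]["status"] != "True":
            return False
    return bool(pods)

# Hypothetical sample mimicking `kubectl get po ... -o=json` output.
sample = json.dumps({"items": [
    {"metadata": {"name": "kube-apiserver-functional-379700"},
     "status": {"phase": "Running",
                "conditions": [{"type": "Ready", "status": "True"}]}},
]})

print(control_plane_healthy(sample))  # → True
```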

                                                
                                    
TestFunctional/serial/LogsCmd (8.76s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 logs: (8.7619593s)
--- PASS: TestFunctional/serial/LogsCmd (8.76s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (10.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2918732112\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2918732112\001\logs.txt: (10.9685201s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.97s)

                                                
                                    
TestFunctional/serial/InvalidService (21.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-379700 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-379700
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-379700: exit status 115 (17.2248106s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.25.247.13:31134 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 03:50:07.066659    9240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-379700 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.47s)

                                                
                                    
TestFunctional/parallel/StatusCmd (43.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 status: (14.6810974s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.789472s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 status -o json: (13.8691531s)
--- PASS: TestFunctional/parallel/StatusCmd (43.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (38.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-379700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-379700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-z7htc" [6d7befa7-5ffd-4534-8c5d-4f9a0c436a5c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-z7htc" [6d7befa7-5ffd-4534-8c5d-4f9a0c436a5c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.0232366s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 service hello-node-connect --url: (19.3741284s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.25.247.13:31788
functional_test.go:1671: http://172.25.247.13:31788: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-z7htc

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.25.247.13:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.25.247.13:31788
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (38.85s)
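The echoserver body captured above is plain text: `key=value` lines grouped under headings such as "Request Headers:". A small parser can pull individual fields back out of a logged response; this is a post-processing sketch over the log format, not part of the test suite:

```python
def parse_echoserver(body: str) -> dict:
    """Parse an echoserver response body into {section: {key: value}}."""
    sections, current = {}, None
    for line in body.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.endswith(":"):            # e.g. "Request Headers:"
            current = line[:-1]
            sections[current] = {}
        elif "=" in line and current:
            key, _, value = line.partition("=")
            sections[current][key] = value
    return sections

# Abbreviated copy of the body logged above.
body = """Hostname: hello-node-connect-57b4589c47-z7htc

Request Headers:
\thost=172.25.247.13:31788
\tuser-agent=Go-http-client/1.1
"""
print(parse_echoserver(body)["Request Headers"]["host"])  # → 172.25.247.13:31788
```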

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.62s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.62s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6379f4b6-01e2-443a-8b28-13183f7119e2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0189307s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-379700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-379700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-379700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-379700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00eaa50f-aea6-4043-9141-cb03a10583cf] Pending
helpers_test.go:344: "sp-pod" [00eaa50f-aea6-4043-9141-cb03a10583cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00eaa50f-aea6-4043-9141-cb03a10583cf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.0160809s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-379700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-379700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-379700 delete -f testdata/storage-provisioner/pod.yaml: (1.4187526s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-379700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f0a66fc3-d747-4f10-9629-dec1d59b1ca0] Pending
helpers_test.go:344: "sp-pod" [f0a66fc3-d747-4f10-9629-dec1d59b1ca0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f0a66fc3-d747-4f10-9629-dec1d59b1ca0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0087663s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-379700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.41s)

                                                
                                    
TestFunctional/parallel/SSHCmd (20.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "echo hello": (10.3646028s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "cat /etc/hostname": (10.0815907s)
--- PASS: TestFunctional/parallel/SSHCmd (20.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (57.57s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.0056592s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh -n functional-379700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh -n functional-379700 "sudo cat /home/docker/cp-test.txt": (10.4722994s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cp functional-379700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2673888018\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cp functional-379700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd2673888018\001\cp-test.txt: (10.842074s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh -n functional-379700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh -n functional-379700 "sudo cat /home/docker/cp-test.txt": (10.2776393s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0258288s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh -n functional-379700 "sudo cat /tmp/does/not/exist/cp-test.txt"
E0520 03:53:04.553084    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh -n functional-379700 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.938521s)
--- PASS: TestFunctional/parallel/CpCmd (57.57s)

                                                
                                    
TestFunctional/parallel/MySQL (65.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-379700 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-5s7jr" [dc504a0d-25f1-4a40-951e-b625414263f4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-5s7jr" [dc504a0d-25f1-4a40-951e-b625414263f4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.0269486s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;": exit status 1 (410.9507ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;": exit status 1 (407.3939ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;": exit status 1 (484.5961ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;": exit status 1 (1.0083066s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;": exit status 1 (342.624ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-379700 exec mysql-64454c8b5c-5s7jr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (65.59s)
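The MySQL block shows the test polling `mysql -ppassword -e "show databases;"` and tolerating the transient ERROR 2002 ("can't connect to socket") and ERROR 1045 failures while the server finishes initializing, eventually passing. The generic retry-until-success pattern behind that loop, sketched in Python with a simulated probe standing in for the `kubectl exec` call:

```python
import time

def retry_until(probe, attempts=5, delay=0.01):
    """Call probe() until it reports success or attempts run out.

    probe returns (ok, result); transient failures (like MySQL's
    ERROR 2002 during startup) are simply retried after a delay.
    """
    last = None
    for _ in range(attempts):
        ok, last = probe()
        if ok:
            return last
        time.sleep(delay)
    raise RuntimeError(f"gave up after {attempts} attempts: {last}")

# Simulated probe: fails twice (server still starting), then succeeds.
state = {"calls": 0}
def probe():
    state["calls"] += 1
    if state["calls"] < 3:
        return False, "ERROR 2002 (HY000): Can't connect"
    return True, "information_schema\nmysql"

print(retry_until(probe))  # succeeds on the third call
```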

                                                
                                    
TestFunctional/parallel/FileSync (11.68s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/4100/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/test/nested/copy/4100/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/test/nested/copy/4100/hosts": (11.6746698s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.68s)

                                                
                                    
TestFunctional/parallel/CertSync (63.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/4100.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/4100.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/4100.pem": (10.6433255s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/4100.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /usr/share/ca-certificates/4100.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /usr/share/ca-certificates/4100.pem": (10.8450078s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.3823076s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/41002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/41002.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/41002.pem": (10.4110807s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/41002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /usr/share/ca-certificates/41002.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /usr/share/ca-certificates/41002.pem": (10.3976764s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.6912949s)
--- PASS: TestFunctional/parallel/CertSync (63.37s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.18s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-379700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.18s)
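The NodeLabels test uses a go-template (`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`) to print only the label keys of the first node. The equivalent extraction over `kubectl get nodes -o json` output, sketched in Python against a made-up label set:

```python
import json

def node_label_keys(nodes_json: str) -> list:
    """Label keys of the first node, like the go-template above."""
    nodes = json.loads(nodes_json)["items"]
    return sorted(nodes[0]["metadata"]["labels"].keys())

# Hypothetical node list; real clusters carry more labels than this.
sample = json.dumps({"items": [{"metadata": {"labels": {
    "kubernetes.io/hostname": "functional-379700",
    "kubernetes.io/os": "linux",
}}}]})

print(node_label_keys(sample))  # → ['kubernetes.io/hostname', 'kubernetes.io/os']
```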

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (10.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 ssh "sudo systemctl is-active crio": exit status 1 (10.5699561s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0520 03:53:06.227179   14880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.57s)

                                                
                                    
TestFunctional/parallel/License (3.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.3951886s)
--- PASS: TestFunctional/parallel/License (3.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (18.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-379700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-379700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ks68r" [c9fa9a26-2c33-424d-9a4b-4d50bf3256be] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ks68r" [c9fa9a26-2c33-424d-9a4b-4d50bf3256be] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0219106s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (10.96s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.5903143s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.96s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (10.69s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.4998565s)
functional_test.go:1311: Took "10.5003589s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "186.9173ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.69s)

TestFunctional/parallel/ServiceCmd/List (13.65s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 service list: (13.6538445s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.65s)

TestFunctional/parallel/ProfileCmd/profile_json_output (11.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.9573705s)
functional_test.go:1362: Took "10.9579078s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "170.1042ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.97s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 service list -o json: (14.9681017s)
functional_test.go:1490: Took "14.9685406s" to run "out/minikube-windows-amd64.exe -p functional-379700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.97s)

TestFunctional/parallel/DockerEnv/powershell (45.28s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-379700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-379700"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-379700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-379700": (29.8623941s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-379700 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-379700 docker-env | Invoke-Expression ; docker images": (15.4015387s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (45.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.79s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 update-context --alsologtostderr -v=2: (2.7859259s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.79s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.87s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 update-context --alsologtostderr -v=2: (2.8675623s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.87s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.94s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 update-context --alsologtostderr -v=2: (2.9406515s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.94s)

TestFunctional/parallel/Version/short (0.2s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.20s)

TestFunctional/parallel/Version/components (8.33s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 version -o=json --components: (8.3276774s)
--- PASS: TestFunctional/parallel/Version/components (8.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls --format short --alsologtostderr: (8.4489537s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-379700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-379700
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-379700
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-379700 image ls --format short --alsologtostderr:
W0520 03:55:17.570541    6548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0520 03:55:17.577544    6548 out.go:291] Setting OutFile to fd 824 ...
I0520 03:55:17.594534    6548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:17.594534    6548 out.go:304] Setting ErrFile to fd 752...
I0520 03:55:17.594534    6548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:17.626532    6548 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:17.626532    6548 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:17.627533    6548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:20.221460    6548 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:20.221739    6548 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:20.241693    6548 ssh_runner.go:195] Run: systemctl --version
I0520 03:55:20.241693    6548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:22.871873    6548 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:22.871873    6548 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:22.871873    6548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
I0520 03:55:25.697407    6548 main.go:141] libmachine: [stdout =====>] : 172.25.247.13

I0520 03:55:25.697407    6548 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:25.697407    6548 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
I0520 03:55:25.808797    6548 ssh_runner.go:235] Completed: systemctl --version: (5.5670964s)
I0520 03:55:25.822859    6548 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.45s)

TestFunctional/parallel/ImageCommands/ImageListTable (8.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls --format table --alsologtostderr: (8.2265539s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-379700 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-379700 | fd53d3efb0f4f | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | e784f4560448b | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| docker.io/library/nginx                     | alpine            | 501d84f5d0648 | 48.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/google-containers/addon-resizer      | functional-379700 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-379700 image ls --format table --alsologtostderr:
W0520 03:55:26.008395   14112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0520 03:55:26.015398   14112 out.go:291] Setting OutFile to fd 1364 ...
I0520 03:55:26.032395   14112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:26.032395   14112 out.go:304] Setting ErrFile to fd 1368...
I0520 03:55:26.032395   14112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:26.048404   14112 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:26.048404   14112 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:26.048404   14112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:28.515859   14112 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:28.515859   14112 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:28.535329   14112 ssh_runner.go:195] Run: systemctl --version
I0520 03:55:28.535329   14112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:31.097060   14112 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:31.098057   14112 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:31.098146   14112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
I0520 03:55:33.923009   14112 main.go:141] libmachine: [stdout =====>] : 172.25.247.13

I0520 03:55:33.923009   14112 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:33.923129   14112 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
I0520 03:55:34.023619   14112 ssh_runner.go:235] Completed: systemctl --version: (5.4882831s)
I0520 03:55:34.034078   14112 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (8.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls --format json --alsologtostderr: (8.3376447s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-379700 image ls --format json --alsologtostderr:
[{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"fd53d3efb0f4fba37c2caecba91e48d47c9624c2b35d722c5eb31d47e2081f39","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-379700"],"size":"30"},{"id":"501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"a52dc94f0a91256bde86a1c3
027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-379700"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repo
Digests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-379700 image ls --format json --alsologtostderr:
W0520 03:55:25.993399    4540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0520 03:55:26.000401    4540 out.go:291] Setting OutFile to fd 1320 ...
I0520 03:55:26.001521    4540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:26.001521    4540 out.go:304] Setting ErrFile to fd 716...
I0520 03:55:26.001521    4540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:26.020392    4540 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:26.021402    4540 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:26.021402    4540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:28.517363    4540 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:28.517468    4540 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:28.542563    4540 ssh_runner.go:195] Run: systemctl --version
I0520 03:55:28.542682    4540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:31.124836    4540 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:31.124836    4540 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:31.124940    4540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
I0520 03:55:33.997196    4540 main.go:141] libmachine: [stdout =====>] : 172.25.247.13

I0520 03:55:33.997196    4540 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:33.997196    4540 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
I0520 03:55:34.115047    4540 ssh_runner.go:235] Completed: systemctl --version: (5.572423s)
I0520 03:55:34.128260    4540 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.34s)

TestFunctional/parallel/ImageCommands/ImageListYaml (8.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls --format yaml --alsologtostderr: (8.4086507s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-379700 image ls --format yaml --alsologtostderr:
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-379700
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: fd53d3efb0f4fba37c2caecba91e48d47c9624c2b35d722c5eb31d47e2081f39
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-379700
size: "30"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-379700 image ls --format yaml --alsologtostderr:
W0520 03:55:17.568547   14952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0520 03:55:17.576535   14952 out.go:291] Setting OutFile to fd 1364 ...
I0520 03:55:17.577544   14952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:17.577544   14952 out.go:304] Setting ErrFile to fd 1368...
I0520 03:55:17.577544   14952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:17.596536   14952 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:17.597538   14952 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:17.598582   14952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:20.164721   14952 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:20.164776   14952 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:20.178916   14952 ssh_runner.go:195] Run: systemctl --version
I0520 03:55:20.178916   14952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:22.829917   14952 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:22.829989   14952 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:22.829989   14952 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
I0520 03:55:25.675475   14952 main.go:141] libmachine: [stdout =====>] : 172.25.247.13

I0520 03:55:25.675475   14952 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:25.675475   14952 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
I0520 03:55:25.776927   14952 ssh_runner.go:235] Completed: systemctl --version: (5.5980042s)
I0520 03:55:25.791851   14952 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.41s)

TestFunctional/parallel/ImageCommands/ImageBuild (28.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-379700 ssh pgrep buildkitd: exit status 1 (10.8334205s)

** stderr ** 
	W0520 03:55:17.570541    9960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image build -t localhost/my-image:functional-379700 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image build -t localhost/my-image:functional-379700 testdata\build --alsologtostderr: (10.39167s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-379700 image build -t localhost/my-image:functional-379700 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 0663326d63dc
---> Removed intermediate container 0663326d63dc
---> 9f3b023fbff6
Step 3/3 : ADD content.txt /
---> 0cc1cb592a82
Successfully built 0cc1cb592a82
Successfully tagged localhost/my-image:functional-379700
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-379700 image build -t localhost/my-image:functional-379700 testdata\build --alsologtostderr:
W0520 03:55:28.413107    7764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0520 03:55:28.421097    7764 out.go:291] Setting OutFile to fd 1292 ...
I0520 03:55:28.437230    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:28.437230    7764 out.go:304] Setting ErrFile to fd 1240...
I0520 03:55:28.437230    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:55:28.460093    7764 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:28.481990    7764 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:55:28.482052    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:31.019322    7764 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:31.019436    7764 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:31.043992    7764 ssh_runner.go:195] Run: systemctl --version
I0520 03:55:31.044123    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-379700 ).state
I0520 03:55:33.462144    7764 main.go:141] libmachine: [stdout =====>] : Running

I0520 03:55:33.462144    7764 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:33.463653    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-379700 ).networkadapters[0]).ipaddresses[0]
I0520 03:55:36.148013    7764 main.go:141] libmachine: [stdout =====>] : 172.25.247.13

I0520 03:55:36.148013    7764 main.go:141] libmachine: [stderr =====>] : 
I0520 03:55:36.148157    7764 sshutil.go:53] new ssh client: &{IP:172.25.247.13 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-379700\id_rsa Username:docker}
I0520 03:55:36.252136    7764 ssh_runner.go:235] Completed: systemctl --version: (5.2079876s)
I0520 03:55:36.252136    7764 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.747857378.tar
I0520 03:55:36.265997    7764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 03:55:36.299966    7764 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.747857378.tar
I0520 03:55:36.307698    7764 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.747857378.tar: stat -c "%s %y" /var/lib/minikube/build/build.747857378.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.747857378.tar': No such file or directory
I0520 03:55:36.307698    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.747857378.tar --> /var/lib/minikube/build/build.747857378.tar (3072 bytes)
I0520 03:55:36.365276    7764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.747857378
I0520 03:55:36.401269    7764 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.747857378 -xf /var/lib/minikube/build/build.747857378.tar
I0520 03:55:36.426308    7764 docker.go:360] Building image: /var/lib/minikube/build/build.747857378
I0520 03:55:36.435262    7764 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-379700 /var/lib/minikube/build/build.747857378
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0520 03:55:38.567450    7764 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-379700 /var/lib/minikube/build/build.747857378: (2.1321852s)
I0520 03:55:38.580259    7764 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.747857378
I0520 03:55:38.633846    7764 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.747857378.tar
I0520 03:55:38.653725    7764 build_images.go:217] Built localhost/my-image:functional-379700 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.747857378.tar
I0520 03:55:38.654377    7764 build_images.go:133] succeeded building to: functional-379700
I0520 03:55:38.654377    7764 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls: (7.4408749s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.67s)

TestFunctional/parallel/ImageCommands/Setup (5.19s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.9073902s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-379700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-379700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-379700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-379700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-379700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14260: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 15348: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image load --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image load --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr: (17.7112279s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls: (7.8486512s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-379700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-379700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9ae53b34-9a68-48a8-a439-d91b0e06dff9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9ae53b34-9a68-48a8-a439-d91b0e06dff9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0119181s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.56s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-379700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 15140: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image load --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image load --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr: (11.9009237s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls: (7.4596138s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.3722158s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-379700
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image load --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image load --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr: (13.7245862s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls: (7.4568736s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.79s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image save gcr.io/google-containers/addon-resizer:functional-379700 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image save gcr.io/google-containers/addon-resizer:functional-379700 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.9440503s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.94s)

TestFunctional/parallel/ImageCommands/ImageRemove (15.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image rm gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image rm gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr: (7.5559691s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls: (7.4982847s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.06s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.6548263s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image ls: (7.4370109s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.09s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-379700
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-379700 image save --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-379700 image save --daemon gcr.io/google-containers/addon-resizer:functional-379700 --alsologtostderr: (9.1783631s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-379700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.56s)

TestFunctional/delete_addon-resizer_images (0.44s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-379700
--- PASS: TestFunctional/delete_addon-resizer_images (0.44s)

TestFunctional/delete_my-image_image (0.2s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-379700
--- PASS: TestFunctional/delete_my-image_image (0.20s)

TestFunctional/delete_minikube_cached_images (0.18s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-379700
--- PASS: TestFunctional/delete_minikube_cached_images (0.18s)

TestMultiControlPlane/serial/StartCluster (729.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-291700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0520 03:58:04.562965    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 04:00:25.056419    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.071419    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.086454    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.117667    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.165336    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.257478    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.428656    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:25.759805    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:26.401576    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:27.686457    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:30.261134    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:35.387410    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:00:45.642439    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:01:06.134324    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:01:47.105899    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:03:04.552173    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 04:03:09.037101    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:05:25.054745    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:05:52.884905    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:06:07.773776    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 04:08:04.552568    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-291700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m30.6698082s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr: (38.4615858s)
--- PASS: TestMultiControlPlane/serial/StartCluster (729.13s)

TestMultiControlPlane/serial/DeployApp (15.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-291700 -- rollout status deployment/busybox: (7.3300946s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- nslookup kubernetes.io: (1.9053915s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-mw76w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- nslookup kubernetes.io: (1.4867783s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-mw76w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-bghlc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-mw76w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-291700 -- exec busybox-fc5497c4f-qxg28 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (15.45s)

TestMultiControlPlane/serial/AddWorkerNode (266.42s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-291700 -v=7 --alsologtostderr
E0520 04:13:04.566535    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-291700 -v=7 --alsologtostderr: (3m34.3861188s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 status -v=7 --alsologtostderr: (52.0343415s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (266.42s)

TestMultiControlPlane/serial/NodeLabels (0.2s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-291700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (30.3s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0520 04:15:25.059247    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (30.3016607s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (30.30s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (669.67s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 status --output json -v=7 --alsologtostderr: (51.5019163s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700:/home/docker/cp-test.txt: (10.1519212s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt"
E0520 04:16:48.251748    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt": (10.1009395s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700.txt: (10.1500866s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt": (10.1474123s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt ha-291700-m02:/home/docker/cp-test_ha-291700_ha-291700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt ha-291700-m02:/home/docker/cp-test_ha-291700_ha-291700-m02.txt: (17.7201537s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt": (10.1015921s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test_ha-291700_ha-291700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test_ha-291700_ha-291700-m02.txt": (10.1433303s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt ha-291700-m03:/home/docker/cp-test_ha-291700_ha-291700-m03.txt
E0520 04:18:04.567162    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt ha-291700-m03:/home/docker/cp-test_ha-291700_ha-291700-m03.txt: (17.6476542s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt": (10.0787275s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test_ha-291700_ha-291700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test_ha-291700_ha-291700-m03.txt": (10.0715373s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt ha-291700-m04:/home/docker/cp-test_ha-291700_ha-291700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700:/home/docker/cp-test.txt ha-291700-m04:/home/docker/cp-test_ha-291700_ha-291700-m04.txt: (17.6675214s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test.txt": (10.0724519s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test_ha-291700_ha-291700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test_ha-291700_ha-291700-m04.txt": (10.1720586s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700-m02:/home/docker/cp-test.txt: (10.1202429s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt": (10.1295728s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m02.txt: (10.1906551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt": (10.1357119s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt ha-291700:/home/docker/cp-test_ha-291700-m02_ha-291700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt ha-291700:/home/docker/cp-test_ha-291700-m02_ha-291700.txt: (17.8320896s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt": (10.1834439s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test_ha-291700-m02_ha-291700.txt"
E0520 04:20:25.056118    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test_ha-291700-m02_ha-291700.txt": (10.1361429s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt ha-291700-m03:/home/docker/cp-test_ha-291700-m02_ha-291700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt ha-291700-m03:/home/docker/cp-test_ha-291700-m02_ha-291700-m03.txt: (17.6641647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt": (10.1802647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test_ha-291700-m02_ha-291700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test_ha-291700-m02_ha-291700-m03.txt": (10.1541435s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt ha-291700-m04:/home/docker/cp-test_ha-291700-m02_ha-291700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m02:/home/docker/cp-test.txt ha-291700-m04:/home/docker/cp-test_ha-291700-m02_ha-291700-m04.txt: (17.6808215s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test.txt": (10.2627407s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test_ha-291700-m02_ha-291700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test_ha-291700-m02_ha-291700-m04.txt": (10.1259384s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700-m03:/home/docker/cp-test.txt: (10.033209s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt": (10.1779384s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m03.txt: (10.1000474s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt": (10.054973s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt ha-291700:/home/docker/cp-test_ha-291700-m03_ha-291700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt ha-291700:/home/docker/cp-test_ha-291700-m03_ha-291700.txt: (17.8638679s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt"
E0520 04:22:47.777598    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt": (9.9692075s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test_ha-291700-m03_ha-291700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test_ha-291700-m03_ha-291700.txt": (10.1524114s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt ha-291700-m02:/home/docker/cp-test_ha-291700-m03_ha-291700-m02.txt
E0520 04:23:04.557410    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt ha-291700-m02:/home/docker/cp-test_ha-291700-m03_ha-291700-m02.txt: (17.7394409s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt": (10.1000466s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test_ha-291700-m03_ha-291700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test_ha-291700-m03_ha-291700-m02.txt": (10.1438696s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt ha-291700-m04:/home/docker/cp-test_ha-291700-m03_ha-291700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m03:/home/docker/cp-test.txt ha-291700-m04:/home/docker/cp-test_ha-291700-m03_ha-291700-m04.txt: (17.6560697s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test.txt": (10.1893433s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test_ha-291700-m03_ha-291700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test_ha-291700-m03_ha-291700-m04.txt": (10.1816517s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp testdata\cp-test.txt ha-291700-m04:/home/docker/cp-test.txt: (10.1173953s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt": (10.1464148s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4105416774\001\cp-test_ha-291700-m04.txt: (10.0752356s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt": (10.0636602s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt ha-291700:/home/docker/cp-test_ha-291700-m04_ha-291700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt ha-291700:/home/docker/cp-test_ha-291700-m04_ha-291700.txt: (17.8509131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt"
E0520 04:25:25.055407    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt": (10.0942862s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test_ha-291700-m04_ha-291700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700 "sudo cat /home/docker/cp-test_ha-291700-m04_ha-291700.txt": (10.2141s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt ha-291700-m02:/home/docker/cp-test_ha-291700-m04_ha-291700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt ha-291700-m02:/home/docker/cp-test_ha-291700-m04_ha-291700-m02.txt: (17.6951279s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt": (10.1576894s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test_ha-291700-m04_ha-291700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m02 "sudo cat /home/docker/cp-test_ha-291700-m04_ha-291700-m02.txt": (10.3271055s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt ha-291700-m03:/home/docker/cp-test_ha-291700-m04_ha-291700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 cp ha-291700-m04:/home/docker/cp-test.txt ha-291700-m03:/home/docker/cp-test_ha-291700-m04_ha-291700-m03.txt: (17.7799158s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m04 "sudo cat /home/docker/cp-test.txt": (10.1546399s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test_ha-291700-m04_ha-291700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-291700 ssh -n ha-291700-m03 "sudo cat /home/docker/cp-test_ha-291700-m04_ha-291700-m03.txt": (10.0717788s)
--- PASS: TestMultiControlPlane/serial/CopyFile (669.67s)
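The CopyFile sequence above exercises the same round trip dozens of times: `minikube cp` pushes a file to a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. The core copy-then-verify pattern can be sketched locally (plain filesystem operations stand in for the node transport here; this sketch does not shell out to minikube, and the node directory name is illustrative):

```python
import filecmp
import shutil
import tempfile
from pathlib import Path

def copy_and_verify(src: Path, node_dir: Path) -> bool:
    """Copy src into node_dir and confirm the bytes survived the trip.

    shutil.copy stands in for `minikube cp`, and the byte-for-byte
    comparison stands in for reading the file back via
    `minikube ssh ... "sudo cat ..."`.
    """
    dst = node_dir / src.name
    shutil.copy(src, dst)
    return filecmp.cmp(src, dst, shallow=False)

tmp = Path(tempfile.mkdtemp())
src = tmp / "cp-test.txt"
src.write_text("sample payload\n")
node_dir = tmp / "ha-291700-m02"   # illustrative stand-in for a cluster node
node_dir.mkdir()
print(copy_and_verify(src, node_dir))  # → True
```

The test repeats this for every source/destination pair across the four nodes, which is why a pass still takes over eleven minutes.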

                                                
                                    
TestImageBuild/serial/Setup (204.36s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-977200 --driver=hyperv
E0520 04:33:04.569402    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
E0520 04:33:28.258986    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-977200 --driver=hyperv: (3m24.3614474s)
--- PASS: TestImageBuild/serial/Setup (204.36s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.88s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-977200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-977200: (9.8840818s)
--- PASS: TestImageBuild/serial/NormalBuild (9.88s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.36s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-977200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-977200: (9.3613333s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.36s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (8.21s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-977200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-977200: (8.2128609s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.21s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.94s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-977200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-977200: (7.9438375s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.94s)
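The image-build subtests above vary only in which flags they pass to `minikube image build`. A small hypothetical helper (`image_build_args` is not part of minikube) that assembles those argument vectors, using only the flag names visible in the log (`-t`, `-f`, `--build-opt`, `-p`):

```python
from typing import Iterable, List, Optional

def image_build_args(tag: str, context: str, profile: str,
                     dockerfile: Optional[str] = None,
                     build_opts: Iterable[str] = ()) -> List[str]:
    """Assemble a `minikube image build` invocation like the tests above run.

    The relative order of --build-opt flags and the build context differs
    between the logged invocations; one consistent order is used here.
    """
    args = ["minikube", "image", "build", "-t", tag]
    if dockerfile:
        args += ["-f", dockerfile]          # as in BuildWithSpecifiedDockerfile
    for opt in build_opts:
        args.append(f"--build-opt={opt}")   # as in BuildWithBuildArg / no-cache
    args += [context, "-p", profile]
    return args

print(" ".join(image_build_args(
    "aaa:latest", "./testdata/image-build/test-arg", "image-977200",
    build_opts=["build-arg=ENV_A=test_env_str", "no-cache"])))
```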

                                                
                                    
TestJSONOutput/start/Command (218.16s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-395900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0520 04:35:25.053129    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:38:04.566731    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-395900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m38.1569611s)
--- PASS: TestJSONOutput/start/Command (218.16s)
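With `--output=json`, minikube emits one CloudEvents-style JSON object per line on stdout (the TestErrorJSONOutput stdout later in this report shows the shape). A minimal sketch of consuming a step event follows; the field names come from the report, while the `message` value is illustrative:

```python
import json

# One line of --output=json stdout, in the shape shown under TestErrorJSONOutput.
line = ('{"specversion":"1.0","id":"0","source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json",'
        '"data":{"currentstep":"0","message":"demo","name":"Initial Minikube Setup",'
        '"totalsteps":"19"}}')

event = json.loads(line)
# Step events carry progress in "data"; note currentstep/totalsteps arrive
# as strings, not integers.
if event["type"] == "io.k8s.sigs.minikube.step":
    data = event["data"]
    print(f'[{data["currentstep"]}/{data["totalsteps"]}] {data["name"]}')
    # → [0/19] Initial Minikube Setup
```

The DistinctCurrentSteps and IncreasingCurrentSteps subtests below assert properties of exactly this `currentstep` field across all emitted events.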

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.24s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-395900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-395900 --output=json --user=testUser: (8.2356184s)
--- PASS: TestJSONOutput/pause/Command (8.24s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (8.03s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-395900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-395900 --output=json --user=testUser: (8.0338215s)
--- PASS: TestJSONOutput/unpause/Command (8.03s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (35.62s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-395900 --output=json --user=testUser
E0520 04:39:27.786616    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-395900 --output=json --user=testUser: (35.6210224s)
--- PASS: TestJSONOutput/stop/Command (35.62s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.37s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-753700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-753700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (197.3153ms)

-- stdout --
	{"specversion":"1.0","id":"434eaed6-e4ff-47f2-9bde-86d1335b7f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-753700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e1b0560-28eb-4925-8cbb-1069f9649259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"8f44910e-d04e-41f7-bed5-069080a3f3e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"49c7f707-b7a5-456c-a9bc-21c1452a1f2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2c0734cf-a5b2-42fe-8da6-a37166ad4b01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"ef1c3e11-7aeb-4d71-9b7b-8fab1f5476d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"91b17949-5d15-4f25-aa70-eccbeb269437","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0520 04:39:58.827224    3084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-753700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-753700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-753700: (1.173012s)
--- PASS: TestErrorJSONOutput (1.37s)
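The `-- stdout --` block above is a stream of CloudEvents-style JSON lines (`specversion`, `id`, `source`, `type`, `data`), as emitted by `minikube start --output=json`. As a minimal sketch of consuming that stream, the snippet below parses one line with the standard `json` module; the sample is the error event copied from the output above, and the field names are taken from it rather than from any separate schema:

```python
import json

# One event line copied from the -- stdout -- section above (the final error event).
line = ('{"specversion":"1.0","id":"91b17949-5d15-4f25-aa70-eccbeb269437",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
        '"issues":"","message":"The driver \'fail\' is not supported on windows/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# The "type" field distinguishes step, info, and error events; the payload is under "data".
assert event["type"] == "io.k8s.sigs.minikube.error"
print(event["data"]["name"], event["data"]["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

A real consumer would apply the same parse per line of stdout, dispatching on the `type` suffix (`.step`, `.info`, `.error`).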

TestMainNoArgs (0.23s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

TestMinikubeProfile (542.87s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-701000 --driver=hyperv
E0520 04:40:25.059304    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:43:04.566074    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-701000 --driver=hyperv: (3m25.6903317s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-701000 --driver=hyperv
E0520 04:45:25.061668    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-701000 --driver=hyperv: (3m28.1285454s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-701000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (20.2905202s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-701000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (20.4247321s)
helpers_test.go:175: Cleaning up "second-701000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-701000
E0520 04:48:04.559297    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-701000: (41.4600856s)
helpers_test.go:175: Cleaning up "first-701000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-701000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-701000: (46.120952s)
--- PASS: TestMinikubeProfile (542.87s)

TestMountStart/serial/StartWithMountFirst (162.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-859800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0520 04:50:08.266356    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 04:50:25.051897    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-859800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m41.9786361s)
--- PASS: TestMountStart/serial/StartWithMountFirst (162.99s)

TestMountStart/serial/VerifyMountFirst (10.06s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-859800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-859800 ssh -- ls /minikube-host: (10.0623021s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.06s)

TestMountStart/serial/StartWithMountSecond (163s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-931300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0520 04:53:04.559392    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-931300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m41.9999687s)
--- PASS: TestMountStart/serial/StartWithMountSecond (163.00s)

TestMountStart/serial/VerifyMountSecond (9.95s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-931300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-931300 ssh -- ls /minikube-host: (9.9507791s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.95s)

TestMountStart/serial/DeleteFirst (32.15s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-859800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-859800 --alsologtostderr -v=5: (32.1454599s)
--- PASS: TestMountStart/serial/DeleteFirst (32.15s)

TestMountStart/serial/VerifyMountPostDelete (9.85s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-931300 ssh -- ls /minikube-host
E0520 04:55:25.056835    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-931300 ssh -- ls /minikube-host: (9.8516473s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.85s)

TestMountStart/serial/Stop (27.74s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-931300
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-931300: (27.7372731s)
--- PASS: TestMountStart/serial/Stop (27.74s)

TestMountStart/serial/RestartStopped (124.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-931300
E0520 04:56:07.793477    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-931300: (2m3.1188912s)
--- PASS: TestMountStart/serial/RestartStopped (124.12s)

TestMountStart/serial/VerifyMountPostStop (9.94s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-931300 ssh -- ls /minikube-host
E0520 04:58:04.564488    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-931300 ssh -- ls /minikube-host: (9.9389537s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.94s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-093300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (10.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.3399116s)
--- PASS: TestMultiNode/serial/ProfileList (10.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-509600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-509600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (273.5671ms)

-- stdout --
	* [NoKubernetes-509600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0520 05:47:21.847417    9056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestStoppedBinaryUpgrade/Upgrade (860.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3939720197.exe start -p stopped-upgrade-730100 --memory=2200 --vm-driver=hyperv
E0520 05:53:04.574452    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3939720197.exe start -p stopped-upgrade-730100 --memory=2200 --vm-driver=hyperv: (5m55.5772721s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3939720197.exe -p stopped-upgrade-730100 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3939720197.exe -p stopped-upgrade-730100 stop: (36.5593935s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-730100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0520 06:00:25.061529    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-730100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m48.2938263s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (860.43s)

TestPause/serial/Start (499.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-325200 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0520 05:56:48.313836    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
E0520 05:58:04.571094    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-363100\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-325200 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (8m19.4557551s)
--- PASS: TestPause/serial/Start (499.46s)

TestPause/serial/SecondStartNoReconfiguration (300.84s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-325200 --alsologtostderr -v=1 --driver=hyperv
E0520 06:05:25.070563    4100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-379700\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-325200 --alsologtostderr -v=1 --driver=hyperv: (5m0.8195405s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (300.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.89s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-730100
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-730100: (10.8903081s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.89s)

TestPause/serial/Pause (10.03s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-325200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-325200 --alsologtostderr -v=5: (10.0292473s)
--- PASS: TestPause/serial/Pause (10.03s)

TestPause/serial/VerifyStatus (17.59s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-325200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-325200 --output=json --layout=cluster: exit status 2 (17.5856057s)

-- stdout --
	{"Name":"pause-325200","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-325200","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0520 06:10:11.849028    4060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (17.59s)
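The `--layout=cluster` JSON above encodes state with HTTP-style status codes (per its own `StatusName` fields: 200 OK, 405 Stopped, 418 Paused). As a small sketch of reading that structure, the snippet below walks a trimmed copy of the JSON from the output above and collects the paused components; the trimming (dropping `BinaryVersion`, `Step`, and the `kubeconfig` component) is mine, not part of the test run:

```python
import json

# Trimmed copy of the --layout=cluster status JSON from the output above.
status = json.loads('''
{"Name":"pause-325200","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-325200","StatusCode":200,"StatusName":"OK",
  "Components":{
   "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
   "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
''')

# Collect every per-node component whose code is 418 ("Paused").
paused = [c["Name"]
          for node in status["Nodes"]
          for c in node["Components"].values()
          if c["StatusCode"] == 418]
print(paused)  # ['apiserver']
```

This also matches why the test accepts exit status 2 here: a paused apiserver plus a stopped kubelet is the expected post-pause state, not a failure.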

TestPause/serial/Unpause (8.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-325200 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-325200 --alsologtostderr -v=5: (8.7043447s)
--- PASS: TestPause/serial/Unpause (8.70s)

TestPause/serial/PauseAgain (8.44s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-325200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-325200 --alsologtostderr -v=5: (8.4362943s)
--- PASS: TestPause/serial/PauseAgain (8.44s)

TestPause/serial/DeletePaused (46.95s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-325200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-325200 --alsologtostderr -v=5: (46.9499318s)
--- PASS: TestPause/serial/DeletePaused (46.95s)

Test skip (30/205)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-379700 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-379700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 4124: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-379700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-379700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.031418s)

-- stdout --
	* [functional-379700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0520 03:51:09.050752    1336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 03:51:09.053255    1336 out.go:291] Setting OutFile to fd 576 ...
	I0520 03:51:09.054931    1336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:51:09.055529    1336 out.go:304] Setting ErrFile to fd 812...
	I0520 03:51:09.055688    1336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:51:09.090266    1336 out.go:298] Setting JSON to false
	I0520 03:51:09.095942    1336 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2265,"bootTime":1716200003,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:51:09.096014    1336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:51:09.098894    1336 out.go:177] * [functional-379700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:51:09.103344    1336 notify.go:220] Checking for updates...
	I0520 03:51:09.105431    1336 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:51:09.107990    1336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:51:09.112989    1336 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:51:09.119005    1336 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:51:09.122974    1336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:51:09.128983    1336 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:51:09.129983    1336 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-379700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-379700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0331007s)

-- stdout --
	* [functional-379700] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0520 03:51:04.036737    1700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0520 03:51:04.037739    1700 out.go:291] Setting OutFile to fd 576 ...
	I0520 03:51:04.038731    1700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:51:04.038731    1700 out.go:304] Setting ErrFile to fd 812...
	I0520 03:51:04.038731    1700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:51:04.070759    1700 out.go:298] Setting JSON to false
	I0520 03:51:04.076199    1700 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2260,"bootTime":1716200003,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0520 03:51:04.076199    1700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:51:04.080937    1700 out.go:177] * [functional-379700] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0520 03:51:04.085538    1700 notify.go:220] Checking for updates...
	I0520 03:51:04.088708    1700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0520 03:51:04.094713    1700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:51:04.097712    1700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0520 03:51:04.100713    1700 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:51:04.103719    1700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:51:04.108716    1700 config.go:182] Loaded profile config "functional-379700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:51:04.109732    1700 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)